Decaying the learning rate from the 100th epoch


Given that

learning_rate = 0.0004
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=learning_rate, betas=(0.5, 0.999)
)

is there a way of decaying the learning rate from the 100th epoch?

Is this a good practice:

decayRate = 0.96
# multiply the learning rate by decayRate on every scheduler.step()
my_lr_scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer, gamma=decayRate)
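For context, a minimal sketch of delaying that decay until the 100th epoch by only stepping the scheduler from then on (train_one_epoch is a hypothetical placeholder for the per-epoch training code, not part of the question):

num_epochs = 200

for epoch in range(num_epochs):
    train_one_epoch(model, optimizer)  # hypothetical: run one epoch of training
    if epoch >= 100:
        # before epoch 100 the scheduler is never stepped, so lr stays at 0.0004;
        # from epoch 100 onwards each step multiplies lr by 0.96
        my_lr_scheduler.step()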

1 Answer

Answered by Harish Vutukuri:
from torch.optim.lr_scheduler import MultiStepLR

# multiply the learning rate by gamma=0.1 at epoch 100
scheduler = MultiStepLR(optimizer, milestones=[100], gamma=0.1)

Please refer to MultiStepLR for more information.
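A usage sketch under the same setup (train_one_epoch is again a hypothetical placeholder): the scheduler is stepped once per epoch, so the learning rate stays at 0.0004 for epochs 0-99 and drops to 0.00004 from epoch 100 onwards.

from torch.optim.lr_scheduler import MultiStepLR

scheduler = MultiStepLR(optimizer, milestones=[100], gamma=0.1)

for epoch in range(200):
    train_one_epoch(model, optimizer)      # hypothetical: run one epoch of training
    scheduler.step()                       # lr: 0.0004 for epochs 0-99, 0.00004 afterwards
    print(epoch, scheduler.get_last_lr())  # inspect the current learning rate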