Knowing that
learning_rate = 0.0004
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=learning_rate, betas=(0.5, 0.999)
)
is there a way of decaying the learning rate from the 100th epoch?
Is this a good practice:
decayRate = 0.96
my_lr_scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer=optimizer, gamma=decayRate)
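For reference, a minimal sketch of how the scheduler above would be driven (the 200-epoch loop is an illustrative assumption and the training code is elided): ExponentialLR multiplies the learning rate by gamma on every step() call, so gating step() on the epoch index is one simple way to start the decay only at the 100th epoch.

num_epochs = 200  # illustrative; use your own epoch count

for epoch in range(num_epochs):
    # ... your training code for this epoch goes here ...
    # ExponentialLR multiplies the LR by decayRate on each step() call,
    # so only stepping from epoch 100 onward delays the decay until then
    if epoch >= 100:
        my_lr_scheduler.step()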
Please refer to MultiStepLR for more information; it decays the learning rate by gamma at each of the epochs you list in milestones.
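A minimal, self-contained sketch of the MultiStepLR approach (the toy model, the 200-epoch loop, and the single milestone at 100 are only illustrative assumptions): the learning rate is multiplied by gamma once for each milestone reached, so a single milestone gives one drop at the 100th epoch rather than a continuous decay.

import torch

# Toy model and the same Adam settings as above, just to keep the example self-contained
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.0004, betas=(0.5, 0.999))

# Multiply the LR by 0.96 once the scheduler has been stepped 100 times
my_lr_scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[100], gamma=0.96
)

for epoch in range(200):  # illustrative epoch count
    # ... your training code for this epoch goes here ...
    if epoch in (99, 100, 101):
        print(epoch, my_lr_scheduler.get_last_lr())  # shows the drop around epoch 100
    my_lr_scheduler.step()

If you want several drops instead of one, pass more milestones, e.g. milestones=[100, 150, 180].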