Bug description
LearningRateFinder does not update the optimizer if it is defined from the CLI or a yaml config file. For example, I define the optimizer in train.yaml and I set the LearningRateFinder callback (a rough sketch of such a setup is shown below).
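Roughly, the relevant pieces look like the following; the class paths and values here are only illustrative, not the exact config:

```yaml
# train.yaml (illustrative fragment)
model:
  class_path: my_project.models.MyModel   # placeholder class path
optimizer:
  class_path: torch.optim.Adam
  init_args:
    lr: 0.001
trainer:
  callbacks:
    - class_path: lightning.pytorch.callbacks.LearningRateFinder
```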
At the start, it finds the best learning rate, but after that training still uses the learning rate I provided in the config.
I also tried to run the finder manually (a typical version of that is sketched below), but I had the same result.
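A typical manual run goes through the Tuner API before calling fit; this is only a sketch, with placeholder model/datamodule names and a placeholder learning_rate attribute:

```python
from lightning.pytorch import LightningDataModule, LightningModule, Trainer
from lightning.pytorch.tuner import Tuner


def fit_with_lr_find(model: LightningModule, datamodule: LightningDataModule) -> None:
    """Run the LR range test, apply the suggestion, then fit."""
    trainer = Trainer()
    tuner = Tuner(trainer)

    # With update_attr=True (the default), lr_find also writes the suggestion
    # back to model.lr / model.learning_rate if such an attribute exists.
    lr_finder = tuner.lr_find(model, datamodule=datamodule)
    if lr_finder is not None:
        model.learning_rate = lr_finder.suggestion()

    trainer.fit(model, datamodule=datamodule)
```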
How to reproduce the bug
Error messages and logs
Environment
Current environment
More info
I think the problem is specific to how and when the optimizers and schedulers are instantiated, because I ran the same setup with only the batch-size finder and it worked as expected: training used the batch size found during tuning (roughly the config change sketched below).
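For reference, the analogous batch-size setup just swaps the callback in the trainer config; an illustrative fragment (BatchSizeFinder expects a batch_size attribute on the model or datamodule):

```yaml
trainer:
  callbacks:
    - class_path: lightning.pytorch.callbacks.BatchSizeFinder
```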
For now, as I understand it, the way to use LearningRateFinder is to manually define configure_optimizers() in the LightningModule so that it builds the optimizer from the tuned learning-rate attribute (see the sketch below). But this way I can't change the optimizer from the yaml config file.

If the finder is run manually before fit, Lightning will execute configure_optimizers after obtaining the optimal LR; if we use the LearningRateFinder callback instead, configure_optimizers will not be executed again after the optimal LR is found.
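For illustration, a minimal sketch of that configure_optimizers workaround; the module, the layer, and the learning_rate attribute name are placeholders, and the point is only that the optimizer is built from the attribute the finder updates:

```python
import torch
from torch import nn
import lightning.pytorch as pl


class LitModel(pl.LightningModule):
    """Placeholder module: the relevant part is configure_optimizers."""

    def __init__(self, learning_rate: float = 1e-3):
        super().__init__()
        self.learning_rate = learning_rate  # attribute the LR finder updates
        self.layer = nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        # The optimizer is hard-coded here, so it picks up the tuned learning
        # rate, but it can no longer be swapped from train.yaml.
        return torch.optim.Adam(self.parameters(), lr=self.learning_rate)
```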