Continuing training resets logger epoch #6392
Unanswered
wittenator asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 0
Hi people,
I am running PyTorch Lightning in a federated learning setting, so I have several models and need to instantiate a new Trainer object for the same model multiple times. Every time I do this, the associated logger resets the epoch counter and plots the metrics on top of each other. As far as I know, instantiating a new Trainer object to continue training a model is allowed, so: is this expected behaviour, and is there a workaround?
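For context, a minimal sketch of the kind of workaround I have in mind (not tested against my actual setup): reuse a single logger instance across Trainer instantiations so all rounds write to the same run, and resume each round from the previous round's checkpoint so `current_epoch` / `global_step` continue instead of restarting at zero. `MyLightningModule`, the paths, and the round count are placeholders; on older Lightning versions the resume argument is `Trainer(resume_from_checkpoint=...)` rather than `fit(ckpt_path=...)`.

```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger

# One logger instance, pinned to a fixed version, shared by every Trainer
# so all rounds end up in the same TensorBoard run.
logger = TensorBoardLogger("logs", name="federated_client_0", version=0)

model = MyLightningModule()  # placeholder LightningModule
ckpt_path = None

for round_idx in range(5):  # e.g. five federated rounds
    trainer = pl.Trainer(
        # The epoch budget must grow each round: resuming restores the
        # epoch counter, so a constant max_epochs would stop immediately.
        max_epochs=(round_idx + 1) * 2,
        logger=logger,  # same logger object -> continuous plots
    )
    # Resuming from the previous round's checkpoint restores current_epoch
    # and global_step, so the logged metrics keep advancing on the x-axis.
    # (On older versions: pl.Trainer(resume_from_checkpoint=ckpt_path, ...))
    trainer.fit(model, ckpt_path=ckpt_path)
    ckpt_path = trainer.checkpoint_callback.best_model_path or None
```

This avoids the overlapping curves for me in a quick test, but I would still like to know whether the counter reset on a fresh Trainer is intended.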