confusions about torchmetrics in pytorch_lightning #19358
Unanswered · KasuganoLove asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule · 0 comments
According to:
The docs recommend instantiating three separate metric objects (train/val/test) when logging the metric object itself and letting Lightning take care of when to reset the metric, etc. Here is the official code (without the test metric):
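The official snippet isn't reproduced above. To make the question concrete, here is a toy, pure-Python stand-in (not the Lightning/torchmetrics API; `ToyAccuracy` is hypothetical) that shows why separate per-phase instances matter: in Lightning, validation runs inside the training epoch, so a single shared stateful metric would mix train-batch and val-batch state before any epoch-end reset fires.

```python
# Toy stand-in for a stateful metric, to illustrate the "one instance per
# phase" recommendation. ToyAccuracy is hypothetical, not a real API.

class ToyAccuracy:
    """Accumulates correct/total counts across update() calls."""
    def __init__(self):
        self.reset()

    def update(self, correct, total):
        self.correct += correct
        self.total += total

    def compute(self):
        return self.correct / self.total

    def reset(self):
        self.correct = 0
        self.total = 0


# A single shared metric mixes the two phases:
shared = ToyAccuracy()
shared.update(correct=90, total=100)   # "train" batches
shared.update(correct=50, total=100)   # "val" batches contaminate the state
print(shared.compute())                # 0.7 -- neither train nor val accuracy

# Separate instances keep the phases independent:
train_acc, val_acc = ToyAccuracy(), ToyAccuracy()
train_acc.update(correct=90, total=100)
val_acc.update(correct=50, total=100)
print(train_acc.compute())             # 0.9
print(val_acc.compute())               # 0.5
```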
My questions are:

1. Since Lightning automatically calls `torchmetrics.reset()` at `on_train_epoch_end`, `on_validation_epoch_end`, and `on_test_epoch_end`, we only need one torchmetric instance to cover all three phases, is that right?
2. If we only use `torchmetrics.forward()` to calculate the metrics of the inputs, the internal state doesn't matter (even `torchmetrics.reset()` is redundant), is that right?
3. For torchmetrics that rely on internal state (like `FID` scores), following question 1, we only need one torchmetric instance to cover all phases, is that right?
4. Does `torchmetrics.compute()` in my second block of code still work properly under `ddp` mode?

Here is my code for torchmetrics which just uses `torchmetrics.forward()` to calculate the metrics of the inputs:

Here is my code for torchmetrics with useful internal state (like
FID
scores):
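The two code blocks didn't survive above. As a toy sketch of the distinction being asked about (not the actual code; `ToyMeanMetric` is hypothetical): in torchmetrics, `forward()` both updates the accumulated state and returns the metric evaluated on the current batch alone, so if only the returned batch-local value is logged, the accumulated state goes unused. An `FID`-style metric is the opposite case: only `compute()` over the accumulated state is meaningful, and real torchmetrics additionally syncs that state across processes when `compute()` is called under DDP.

```python
# Hypothetical sketch of the forward()/compute() split. ToyMeanMetric is
# illustrative only, not the torchmetrics API.

class ToyMeanMetric:
    def __init__(self):
        self.reset()

    def update(self, value, n=1):
        self.total += value * n
        self.count += n

    def forward(self, value, n=1):
        self.update(value, n)   # accumulated state still advances...
        return value            # ...but the returned value is batch-local

    def compute(self):
        # Real torchmetrics would also sync state across DDP ranks here.
        return self.total / self.count

    def reset(self):
        self.total = 0.0
        self.count = 0


m = ToyMeanMetric()
a = m.forward(0.2)   # batch-local result, regardless of prior state
b = m.forward(0.8)   # batch-local result
print(a, b)          # 0.2 0.8 -- logging only these never touches the state
print(m.compute())   # 0.5 -- the epoch-level aggregate needs the state (FID-style)
```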