16-mixed precision returns nan when multiplying tensors #18725
Unanswered
mshooter asked this question in Lightning Trainer API: Trainer, LightningModule, LightningDataModule
When I multiply my tensors with torch.matmul, one of which contains zeros, I get the following error when using 16-mixed precision:

RuntimeError: Function 'BmmBackward0' returned nan values in its 1th output.

Can someone explain why that is? I am using the latest pytorch-lightning version.
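One common way this error arises (a sketch of the mechanism, not the original code): under 16-mixed precision an intermediate value can overflow float16's maximum of roughly 65504 and become inf, and when that inf gradient flows back through the matmul and meets the zero-valued operand, 0 * inf produces nan inside BmmBackward0. The snippet below reproduces the mechanism with made-up shapes; the explicitly injected inf stands in for a float16 overflow so the sketch also runs in float32:

```python
import torch

# Enable the same anomaly check that produces the "BmmBackward0 returned nan" message
# (Lightning's Trainer(detect_anomaly=True) flag turns this on as well).
torch.autograd.set_detect_anomaly(True)

# One operand is all zeros, as in the question; the other requires gradients.
a = torch.zeros(1, 3, 4)
b = torch.randn(1, 4, 5, requires_grad=True)

out = torch.bmm(a, b)  # the forward result is all zeros and perfectly finite

# Under 16-mixed precision an upstream activation can overflow to inf, because
# float16 tops out at ~65504. Here the inf is injected explicitly for illustration.
scale = torch.full_like(out, float("inf"))
loss = (out * scale).sum()

# The gradient flowing back into bmm is `scale`, i.e. inf. BmmBackward0 computes
# grad_b = a.transpose(1, 2) @ grad_out, and 0 * inf = nan, so anomaly mode raises
# a RuntimeError pointing at BmmBackward0.
loss.backward()
```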
Replies: 1 comment

Up! Anything new on this? I had the same issue, "BmmBackward0 on its 1th output", too. When I switched to float32, the problem was gone. I am hoping this has something to do with the Lightning debugging tool and not the actual backward pass. Let me know what you found.
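If the problem does turn out to be precision rather than the anomaly-detection tooling, two possible workarounds (a sketch, not confirmed by this thread): switch the Trainer to precision="32-true" as the comment above describes, or stay on "16-mixed" and opt just the sensitive matmul out of autocast. The helper name and tensors below are illustrative:

```python
import torch

def bmm_in_float32(x, y):
    # Hypothetical helper (not from the thread): run this one matmul outside
    # autocast so it always executes in float32, even when the Trainer is set
    # to precision="16-mixed".
    with torch.autocast(device_type=x.device.type, enabled=False):
        return torch.bmm(x.float(), y.float())

a = torch.zeros(1, 3, 4)
b = torch.randn(1, 4, 5, requires_grad=True)
out = bmm_in_float32(a, b)  # stays in float32, so no float16 overflow feeds the backward
out.sum().backward()
```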