Handle meta tensors in FX quantization (pytorch#142262)
Summary:
X-link: pytorch/torchrec#2622

If the module being quantized contains some meta tensors alongside tensors on an actual device, quantization should not fail. Quantization should also not fail if the new quantized module is created on a meta device.

Test Plan:
```
buck run fbcode//mode/dev-nosan fbcode//torchrec/fb/quant/tests:test_embedding_modules
```

Differential Revision: D66895899
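The failure mode described above arises because meta tensors carry shape and dtype but no data, so a plain `.to(device)` copy raises. A minimal sketch of the guard pattern (the helper name `safe_to_device` is hypothetical and not taken from this commit's actual implementation):

```python
import torch

def safe_to_device(t: torch.Tensor, device: torch.device) -> torch.Tensor:
    """Move a tensor to `device`, tolerating meta tensors.

    Meta tensors have no storage, so copying out of them fails;
    instead, materialize an uninitialized tensor with the same
    shape and dtype on the target device. (Hypothetical helper,
    illustrating the idea, not the commit's code.)
    """
    if t.is_meta:
        return torch.empty(t.shape, dtype=t.dtype, device=device)
    return t.to(device)

# A module being quantized may mix meta tensors and real tensors.
meta_w = torch.empty(4, 8, device="meta")
real_w = torch.ones(4, 8)

cpu = torch.device("cpu")
out_meta = safe_to_device(meta_w, cpu)  # materialized, not copied
out_real = safe_to_device(real_w, cpu)  # ordinary move
```

With this guard, mixed meta/real modules pass through device handling without raising, which is the behavior the commit's test plan exercises.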