Handle meta tensors in FX quantization #2622
Conversation
This pull request was exported from Phabricator. Differential Revision: D66895899
Summary:
X-link: pytorch/torchrec#2622
If the module being quantized contains some meta tensors and some tensors on an actual device, we should not fail quantization.
Quantization should also not fail if the new quantized module is created on a meta device.
Test Plan:
```
buck run fbcode//mode/dev-nosan fbcode//torchrec/fb/quant/tests:test_embedding_modules
```
Differential Revision: D66895899
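For context, the mixed-device case described above arises when only part of a module's state has real storage. A minimal sketch of how a target device could be picked in that situation; the helper name `infer_quantization_device` is hypothetical and not part of this diff:

```
import torch
from torch import nn


def infer_quantization_device(module: nn.Module) -> torch.device:
    """Pick a target device for the quantized module.

    Hypothetical helper: prefer a real (non-meta) device if any
    parameter or buffer has one; fall back to meta otherwise.
    """
    fallback = torch.device("meta")
    for tensor in list(module.parameters()) + list(module.buffers()):
        if tensor.device.type != "meta":
            return tensor.device
        fallback = tensor.device
    return fallback


# A module holding both meta and CPU tensors should not cause
# quantization to fail just because meta tensors are present.
lin = nn.Linear(4, 4, device="meta")
lin.register_buffer("scale", torch.ones(1))  # real CPU buffer
print(infer_quantization_device(lin))  # cpu
```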
Force-pushed from a58d82c to 1897723
Force-pushed from 1897723 to 6d38b7d
Summary:
X-link: pytorch/pytorch#142262
If the module being quantized contains some meta tensors and some tensors on an actual device, we should not fail quantization.
Quantization should also not fail if the new quantized module is created on a meta device.
If the devices include meta, copying from meta to meta is not necessary, and copying from another device to meta can be skipped.
Differential Revision: D66895899
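A minimal sketch of the copy-skipping rule stated above; `maybe_copy_` is an illustrative name, not the function used in this diff:

```
import torch


def maybe_copy_(dst: torch.Tensor, src: torch.Tensor) -> None:
    """Copy src into dst, skipping copies whose destination is meta.

    Covers both cases from the summary:
    - meta -> meta: nothing to copy, both are shapes without storage.
    - real device -> meta: the destination has no storage, so the copy
      is skipped instead of raising.
    """
    if dst.device.type == "meta":
        return
    dst.copy_(src.to(dst.device))


cpu_w = torch.randn(2, 2)
meta_w = torch.empty(2, 2, device="meta")
maybe_copy_(meta_w, cpu_w)          # skipped: destination is meta
maybe_copy_(cpu_w, cpu_w.clone())   # normal copy still happens
```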
Force-pushed from 6d38b7d to edd3880
Force-pushed from edd3880 to 28bbf9b
Summary:
X-link: pytorch/torchrec#2622
If the module being quantized contains some meta tensors and some tensors on an actual device, we should not fail quantization.
Quantization should also not fail if the new quantized module is created on a meta device.
If the devices include meta, copying from meta to meta is not necessary, and copying from another device to meta can be skipped.
Test Plan:
```
buck run fbcode//mode/dev-nosan fbcode//torchrec/fb/quant/tests:test_embedding_modules
```
Reviewed By: emlin
Differential Revision: D66895899
Force-pushed from 28bbf9b to 048a03b
Summary:
X-link: pytorch/pytorch#142262
If the module being quantized contains some meta tensors and some tensors on an actual device, we should not fail quantization.
Quantization should also not fail if the new quantized module is created on a meta device.
Differential Revision: D66895899
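For completeness, the "created on a meta device" case in the summary corresponds to the standard PyTorch pattern of building a module without storage and materializing it later; a small illustration in plain PyTorch, not the code in this diff:

```
import torch
from torch import nn

# Creating a module on the meta device allocates no storage; this is the
# situation the change wants quantization to tolerate.
m = nn.Linear(8, 8, device="meta")
assert m.weight.device.type == "meta"

# Materialize later on a real device (allocates uninitialized storage).
m = m.to_empty(device="cpu")
print(m.weight.device)  # cpu
```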