
Handle meta tensors in FX quantization #2622

Closed
wants to merge 1 commit

Conversation


@kausv kausv commented Dec 10, 2024

Summary:
X-link: pytorch/pytorch#142262

If the module being quantized contains some meta tensors and some tensors on an actual device, quantization should not fail.

Quantization should also not fail if the new quantized module is created on a meta device.

Differential Revision: D66895899
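The intended behavior can be sketched in a torch-free toy model. This is an illustrative sketch only, not the torchrec/PyTorch code: `FakeTensor` and `quantize_params` are hypothetical stand-ins, and "quantization" here is simulated by rounding. The point it demonstrates is the PR's contract: a pass over mixed meta/real parameters skips meta tensors instead of raising.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FakeTensor:
    """Stand-in for a torch tensor: only the device matters here."""
    device: str
    data: Optional[list] = None  # meta tensors carry no data

def quantize_params(params: dict) -> dict:
    """Quantize-like pass that tolerates a mix of meta and real tensors
    instead of raising when it encounters a meta tensor."""
    out = {}
    for name, t in params.items():
        if t.device == "meta":
            # No storage to read from; keep a meta placeholder.
            out[name] = FakeTensor("meta")
        else:
            # Pretend-quantize: round the values.
            out[name] = FakeTensor(t.device, [round(x) for x in t.data])
    return out

params = {"w": FakeTensor("cpu", [0.4, 1.6]), "emb": FakeTensor("meta")}
q = quantize_params(params)
print(q["w"].data, q["emb"].device)  # [0, 2] meta
```

In real PyTorch the analogous check would be `tensor.device.type == "meta"`; the sketch only models the control flow, not tensor semantics.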

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Dec 10, 2024
@facebook-github-bot

This pull request was exported from Phabricator. Differential Revision: D66895899

kausv added a commit to kausv/pytorch that referenced this pull request Dec 10, 2024
Summary:
X-link: pytorch/torchrec#2622


If the module being quantized contains some meta tensors and some tensors on an actual device, quantization should not fail.

Quantization should also not fail if the new quantized module is created on a meta device.

Test Plan:
```
buck run fbcode//mode/dev-nosan fbcode//torchrec/fb/quant/tests:test_embedding_modules
```

Differential Revision: D66895899

kausv added a commit to kausv/torchrec that referenced this pull request Dec 19, 2024
Summary:

X-link: pytorch/pytorch#142262

If the module being quantized contains some meta tensors and some tensors on an actual device, quantization should not fail.

Quantization should also not fail if the new quantized module is created on a meta device.

If the devices include meta, copying from meta to meta is unnecessary, and copying from another device to meta can be skipped.

Differential Revision: D66895899
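The skip rule described above can be condensed into one predicate. A minimal sketch, assuming the rule as stated in the summary; the helper name `should_skip_copy` is hypothetical and not part of torchrec:

```python
def should_skip_copy(src_device: str, dst_device: str) -> bool:
    """Per the rule above, any copy whose destination is a meta tensor can
    be skipped: meta tensors hold only shape/dtype metadata, no storage.
    This covers both meta->meta and real-device->meta copies."""
    return dst_device == "meta"

# Copies to meta are skipped; ordinary device-to-device copies proceed.
print(should_skip_copy("meta", "meta"))    # True
print(should_skip_copy("cuda:0", "meta"))  # True
print(should_skip_copy("cpu", "cuda:0"))   # False
```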

kausv added a commit to kausv/pytorch that referenced this pull request Dec 19, 2024
Summary:
X-link: pytorch/torchrec#2622


If the module being quantized contains some meta tensors and some tensors on an actual device, quantization should not fail.

Quantization should also not fail if the new quantized module is created on a meta device.

If the devices include meta, copying from meta to meta is unnecessary, and copying from another device to meta can be skipped.

Test Plan:
```
buck run fbcode//mode/dev-nosan fbcode//torchrec/fb/quant/tests:test_embedding_modules
```

Differential Revision: D66895899
kausv added a commit to kausv/torchrec that referenced this pull request Dec 19, 2024
Summary:

X-link: pytorch/pytorch#142262

If the module being quantized contains some meta tensors and some tensors on an actual device, quantization should not fail.

Quantization should also not fail if the new quantized module is created on a meta device.

If the devices include meta, copying from meta to meta is unnecessary, and copying from another device to meta can be skipped.

Differential Revision: D66895899
kausv added a commit to kausv/pytorch that referenced this pull request Dec 19, 2024
Summary:
X-link: pytorch/torchrec#2622


If the module being quantized contains some meta tensors and some tensors on an actual device, quantization should not fail.

Quantization should also not fail if the new quantized module is created on a meta device.

If the devices include meta, copying from meta to meta is unnecessary, and copying from another device to meta can be skipped.

Test Plan:
```
buck run fbcode//mode/dev-nosan fbcode//torchrec/fb/quant/tests:test_embedding_modules
```

Differential Revision: D66895899

kausv added a commit to kausv/torchrec that referenced this pull request Dec 21, 2024
Summary:

X-link: pytorch/pytorch#142262

If the module being quantized contains some meta tensors and some tensors on an actual device, quantization should not fail.

Quantization should also not fail if the new quantized module is created on a meta device.

If the devices include meta, copying from meta to meta is unnecessary, and copying from another device to meta can be skipped.

Differential Revision: D66895899
kausv added a commit to kausv/pytorch that referenced this pull request Dec 21, 2024
Summary:
X-link: pytorch/torchrec#2622


If the module being quantized contains some meta tensors and some tensors on an actual device, quantization should not fail.

Quantization should also not fail if the new quantized module is created on a meta device.

If the devices include meta, copying from meta to meta is unnecessary, and copying from another device to meta can be skipped.

Test Plan:
```
buck run fbcode//mode/dev-nosan fbcode//torchrec/fb/quant/tests:test_embedding_modules
```

Differential Revision: D66895899

kausv added a commit to kausv/pytorch that referenced this pull request Dec 21, 2024
Summary:
X-link: pytorch/torchrec#2622


If the module being quantized contains some meta tensors and some tensors on an actual device, quantization should not fail.

Quantization should also not fail if the new quantized module is created on a meta device.

If the devices include meta, copying from meta to meta is unnecessary, and copying from another device to meta can be skipped.

Test Plan:
```
buck run fbcode//mode/dev-nosan fbcode//torchrec/fb/quant/tests:test_embedding_modules
```

Reviewed By: emlin

Differential Revision: D66895899
Summary:

X-link: pytorch/pytorch#142262

If the module being quantized contains some meta tensors and some tensors on an actual device, quantization should not fail.

Quantization should also not fail if the new quantized module is created on a meta device.

If the devices include meta, copying from meta to meta is unnecessary, and copying from another device to meta can be skipped.

Reviewed By: emlin

Differential Revision: D66895899

Labels: CLA Signed, fb-exported
2 participants