
@xiaowangintel (Collaborator) commented on Sep 25, 2025

Summary
This PR adds an aten._weight_int8pack_mm pass that replaces the decomposed mm + mul pattern in WOQ-int8 models, as sketched below.
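
For context, here is a minimal sketch of the rewrite this pass performs. The function names and shapes are illustrative rather than the actual pass code, and it assumes a PyTorch build where torch._weight_int8pack_mm is implemented for the target device:

```python
import torch

def woq_int8_mm_unfused(x, w_int8, scales):
    # Pattern before the pass: upcast the int8 weight, run a plain mm,
    # then apply the per-output-channel scales as a separate mul.
    return torch.mm(x, w_int8.t().to(x.dtype)) * scales

def woq_int8_mm_fused(x, w_int8, scales):
    # Pattern after the pass: a single fused op. _weight_int8pack_mm takes
    # x [m, k], the non-transposed int8 weight [n, k], and scales [n].
    return torch._weight_int8pack_mm(x, w_int8, scales)

x = torch.randn(8, 64, dtype=torch.bfloat16)
w = torch.randint(-128, 128, (32, 64), dtype=torch.int8)
s = torch.rand(32, dtype=torch.bfloat16)
print(torch.allclose(woq_int8_mm_unfused(x, w, s),
                     woq_int8_mm_fused(x, w, s), atol=1e-1, rtol=1e-1))
```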

Motivation
Improve inference performance for WOQ-int8 models.

Result
We get correct results on Intel GPU.

pytorch-bot (bot) commented on Sep 25, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3061

Note: Links to docs will display an error until the docs builds have been completed.

❌ 6 New Failures

As of commit 39d2971 with merge base 5e90c47:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla bot added the CLA Signed label on Sep 25, 2025
@xiaowangintel added the topic: performance label on Sep 25, 2025

# per channel int8 weight only quantized mm
- w_vals_int8_t = weight_tensor.tensor_impl.int_data.t()
+ w_vals_int8 = weight_tensor.tensor_impl.int_data
Contributor commented on this diff:

This is a code path for int8 CUDA as well, I think, so changing it carries a risk of perf regressions.

Also, this is the older stack; I'd suggest migrating first. WIP here for int8 + plain layout: #3038
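
To make the diff above concrete, here is a hedged sketch of the dispatch after the change: the plain [n, k] int8 weight is passed straight to the fused op, which is why the .t() disappears. The tensor_impl.int_data attribute comes from the diff; tensor_impl.scale and the function shape are assumptions, not torchao's actual kernel code:

```python
import torch

def woq_int8_linear_sketch(input_tensor, weight_tensor):
    # Assumed torchao plain-layout attributes: int_data is the [n, k]
    # int8 weight (from the diff above); scale is assumed to hold the
    # per-channel scales of shape [n].
    w_vals_int8 = weight_tensor.tensor_impl.int_data  # no .t() needed anymore
    scales = weight_tensor.tensor_impl.scale
    # Flatten leading batch dims to a 2-D [m, k] activation.
    x = input_tensor.reshape(-1, input_tensor.shape[-1])
    # Fused int8 weight-only mm + per-channel scaling in one kernel.
    y = torch._weight_int8pack_mm(x, w_vals_int8, scales)
    return y.reshape(*input_tensor.shape[:-1], y.shape[-1])
```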

@xiaowangintel changed the title from Adds _weight_int8pack_mm pass for woq-int8 to [WIP] Adds _weight_int8pack_mm pass for woq-int8 on Sep 29, 2025