Support DeepSeekV3-style block FP8 quantization #1607
Conversation
Signed-off-by: mgoin <[email protected]>
👋 Hi! Thank you for contributing to llm-compressor. Please add the ready label when the PR is ready for review. Note: this is required to complete the testing suite; please only add the label once the PR is code complete and local testing has been performed.
Summary of Changes
Hello @mgoin, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces comprehensive support for DeepSeekV3-style block FP8 quantization, a technique designed to further compress large language models for more efficient inference. The changes encompass the fundamental implementation of block-wise quantization, robust handling of quantization parameters, updated documentation, and a practical example to guide users in applying this new scheme.
Highlights
- New Quantization Scheme: Introduced support for `W8A8-FP8_BLOCK` quantization. This scheme applies block-wise FP8 quantization to weights (typically in 128x128 tiles) and dynamic per-token-group (group size 128) FP8 quantization to activations. A key benefit is that it does not require a calibration dataset.
- Block Quantization Implementation: The core logic for the `BLOCK` quantization strategy has been implemented within `src/llmcompressor/observers/base.py`. This involves calculating and storing individual scales and zero points for each defined block within a tensor, replacing a previous `NotImplementedError`.
- Dynamic Parameter Handling: The `call_observer` function in `src/llmcompressor/modifiers/quantization/calibration.py` has been updated to correctly register and update scale and zero-point parameters. This change specifically addresses the varying shapes of these parameters when block-wise quantization is applied.
- Example and Documentation: A new example script (`examples/quantization_w8a8_fp8/fp8_block_example.py`) has been added to demonstrate how to apply the `FP8_BLOCK` scheme to a model (see the usage sketch after this list). Corresponding documentation has been updated in `docs/schemes.md` to describe this new quantization method.
- Test Coverage: A new test case has been added in `tests/llmcompressor/modifiers/quantization/test_base.py` to ensure that the block quantization configuration, including the `block_structure` parameter, is correctly parsed and resolved by the `GPTQModifier`.
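The new example script is not reproduced in this thread, but based on the highlights above and the repository's existing FP8 dynamic examples, applying the scheme should look roughly like the sketch below. The model ID and output directory are placeholders, and the `FP8_BLOCK` preset name is taken from this PR's description; treat this as an illustration rather than the merged example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Qwen/Qwen2.5-1.5B-Instruct"  # placeholder; any causal LM should work

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# FP8_BLOCK (as described in this PR): 128x128 block-wise FP8 weights plus
# dynamic per-token-group FP8 activations. No calibration dataset is needed
# because activation scales are computed at runtime.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_BLOCK",
    ignore=["lm_head"],
)

oneshot(model=model, recipe=recipe)

SAVE_DIR = MODEL_ID.split("/")[-1] + "-FP8-BLOCK"
model.save_pretrained(SAVE_DIR)
tokenizer.save_pretrained(SAVE_DIR)
```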
Code Review
This pull request adds support for DeepSeekV3-style block FP8 quantization, including the necessary observer logic, calibration handling for dynamic parameter shapes, a new example, and documentation. The changes are well-implemented. My feedback includes suggestions to improve clarity in the documentation, fix inaccuracies in the example script, and refactor a small piece of duplicated code for better maintainability.
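For readers unfamiliar with the strategy, the observer-side idea is to derive one symmetric scale per weight tile rather than per tensor or per channel. The snippet below is not the code added in `src/llmcompressor/observers/base.py`; it is only a minimal PyTorch sketch of per-(128x128)-block FP8 scale computation, assuming the e4m3 FP8 format and a PyTorch version that provides `torch.float8_e4m3fn`.

```python
import torch


def per_block_fp8_scales(weight: torch.Tensor, block: int = 128) -> torch.Tensor:
    """Return one scale per (block x block) tile, shape (ceil(R/block), ceil(C/block))."""
    fp8_max = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3
    rows, cols = weight.shape
    n_rows = (rows + block - 1) // block
    n_cols = (cols + block - 1) // block
    scales = torch.empty(n_rows, n_cols, dtype=torch.float32)
    for i in range(n_rows):
        for j in range(n_cols):
            tile = weight[i * block : (i + 1) * block, j * block : (j + 1) * block]
            # Symmetric quantization: the scale maps the tile's absolute max onto the
            # FP8 range; the zero point is 0 for every block.
            scales[i, j] = tile.abs().amax().clamp(min=1e-12) / fp8_max
    return scales


# Example: a 512x512 weight yields a 4x4 grid of block scales.
scales = per_block_fp8_scales(torch.randn(512, 512))
print(scales.shape)  # torch.Size([4, 4])
```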
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Thanks for adding this! We had some users requesting block support in
Couple comments for @kylesayrs, but otherwise LGTM
Signed-off-by: mgoin <[email protected]>
Signed-off-by: mgoin <[email protected]>
…#1608) Summary: - Updates prepare method to no longer require a replace function but just pass in the original module directly along with the text config - Add llama4 calibration support - swaps `Llama4TextMoe` with `SequentialLlama4TextMoe` modules - Add llama4 example for NVFP4 and W4A16 Testing - Tested llama4 NVFP4 e2e to produce: `nm-testing/Llama-4-Scout-17B-16E-Instruct-NVFP4` --------- Signed-off-by: Kyle Sayers <[email protected]> Co-authored-by: Dipika Sikka <[email protected]>
Summary - Link to NVFP4 and W4A16 examples
## Purpose ## * Provide utilities for fusing norms and embeddings for SpinQuantModifier ## Changes ## * Implement `center_embeddings` and `fuse_norm_linears` * `center_embeddings` doesn't seem to have an effect (and theoretically shouldn't have an effect, and makes the implementation less resilient), but is implemented by the SpinQuant paper. We can implement the utility here and decide to not use it later ## Testing ## * Add `test_center_embeddings` and `test_fuse_norm_linears` --------- Signed-off-by: Kyle Sayers <[email protected]>
## Purpose ## * Fix models which have tied word embeddings by untying the word embeddings * This was previously thought to have been fixed by `patch_tied_tensors_bug`, but that solution came from a misunderstanding that the model config was prescriptive, rather than descriptive (that modifying the config would untie the model weights) ## Changes ## * Replace `patch_tied_tensors_bug` with `untie_word_embeddings` * Do not load models with a ranged `tie_word_embeddings` config ## Testing ## * Verified that saved model now has untied weights * Previous tied tensors tests which were failing now pass --------- Signed-off-by: Kyle Sayers <[email protected]>
…#1436) SUMMARY: LLM Compressor docs website enablement using mkdocs. Additionally, docs structure and required pages have been populated as a starting point. Docs are available at https://vllm-project.github.io/llm-compressor/ currently, they will be deployed to https://docs.vllm.ai/projects/ through vLLM's read the docs setup utilizing the .readthedocs.yaml file. To run locally: ```bash pip install -e ./[dev] mkdocs serve ``` To build locally: ```bash pip install -e ./[dev] mkdocs build ``` TEST PLAN: Manual testing --------- Signed-off-by: Mark Kurtz <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Signed-off-by: shanjiaz <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Signed-off-by: shanjiaz <[email protected]>
…project/llm-compressor into support-deepseek-block-fp8
Signed-off-by: shanjiaz <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: shanjiaz <[email protected]>
…project/llm-compressor into support-deepseek-block-fp8
Fixes #1475
Blocked on CT support in neuralmagic/compressed-tensors#372
TEST PLAN:
Run the related tests locally. This depends on the compressed-tensors PR above.