[Feature]: support for fp8 marlin with MoE #17579

Open · 1 task done
ehartford opened this issue May 2, 2025 · 3 comments
Labels
feature request New feature or request

Comments

@ehartford (Contributor)

🚀 The feature, motivation and pitch

I want to run Qwen3-235B-A22B on Ampere (A100) in FP8.

I quantized it to W8A16 using llm-compressor:

https://huggingface.co/cognitivecomputations/Qwen3-235B-A22B-FP8-W8A16

but when I run it, I get the error:

ERROR 05-02 03:16:53 [multiproc_executor.py:435] AssertionError: float16 is required for MoE compressed models. Set dtype=torch.float16

Please support FP8 with MoE in the Marlin kernel.
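
For reference, a minimal launch sketch with the vLLM Python API: `dtype="float16"` is what the assertion above asks for, and `tensor_parallel_size=8` is just an example value for an A100 node, not the exact command I used.

```python
from vllm import LLM, SamplingParams

# Load the FP8 W8A16 compressed-tensors checkpoint on A100s.
# dtype is set explicitly because the MoE compressed-tensors path
# asserts float16 (see the error above); tensor_parallel_size=8 is
# only an assumed multi-GPU setup for illustration.
llm = LLM(
    model="cognitivecomputations/Qwen3-235B-A22B-FP8-W8A16",
    dtype="float16",
    tensor_parallel_size=8,
)

outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```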

Alternatives

No response

Additional context

No response

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
ehartford added the feature request label on May 2, 2025
@zhangxiaoxing commented May 2, 2025

I guess the following error is also related? It also occurred when running an FP8 MoE model on Ampere cards. Qwen3 in FP8 is nice to have, and Llama 4 in FP8 could be even more interesting due to its potentially larger context length.

(VllmWorker rank=1 pid=61141) ERROR 05-02 09:37:16 [multiproc_executor.py:470] triton.compiler.errors.CompilationError: at 1:0:
(VllmWorker rank=1 pid=61141) ERROR 05-02 09:37:16 [multiproc_executor.py:470] def _per_token_group_quant_fp8(
(VllmWorker rank=1 pid=61141) ERROR 05-02 09:37:16 [multiproc_executor.py:470] ^
(VllmWorker rank=1 pid=61141) ERROR 05-02 09:37:16 [multiproc_executor.py:470] ValueError("type fp8e4nv not supported in this architecture. The supported fp8 dtypes are ('fp8e4b15', 'fp8e5')")
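
For context: `fp8e4nv` is Triton's name for the e4m3 FP8 format, which A100 (compute capability 8.0) cannot compile natively, so the Triton FP8 quant kernel fails there. A small check with standard PyTorch calls illustrates this; the 8.9 threshold below is an assumption on my part, not something taken from vLLM's code.

```python
import torch

# A100 (Ampere) reports (8, 0); Ada is (8, 9) and Hopper is (9, 0).
major, minor = torch.cuda.get_device_capability()
has_native_fp8 = (major, minor) >= (8, 9)  # assumed threshold for e4m3 support

if not has_native_fp8:
    # On Ampere the e4m3 ("fp8e4nv") path is unavailable, which is why this
    # issue asks for the Marlin kernel to handle FP8 MoE weights instead
    # (dequantizing to fp16 inside the kernel).
    print(f"Compute capability {major}.{minor}: no native fp8e4nv support")
```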

@sparkprime

Is there any workaround for running the 235B model on A100? A different quantization, maybe?

@ehartford (Contributor, Author)

AWQ should work, in theory.

I haven't seen anyone publish an AWQ quant of this model yet. I'm working on one myself, but that is also failing for me:

vllm-project/llm-compressor#1406
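
If an AWQ checkpoint does materialize, loading it in vLLM would look roughly like this. This is only a sketch: the model name is a placeholder (no such quant exists yet), and vLLM normally auto-detects AWQ from the checkpoint's quantization_config anyway.

```python
from vllm import LLM

# Hypothetical AWQ checkpoint name -- no published quant at the time of this
# thread. quantization="awq" can also be left out and auto-detected.
llm = LLM(
    model="someorg/Qwen3-235B-A22B-AWQ",  # placeholder, hypothetical
    quantization="awq",
    dtype="float16",
    tensor_parallel_size=8,               # assumed multi-GPU A100 setup
)

print(llm.generate(["Hello"])[0].outputs[0].text)
```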
