[vllm] - AttributeError: '_OpNamespace' '_vllm_fa2_C' object has no attribute 'varlen_fwd' #800
Comments
This error doesn't look related to our frontend adaptation for MiniCPM-o-2_6... you may need to check your CUDA version.
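A quick way to check the versions involved (a minimal diagnostic sketch; it assumes only that torch and vllm are importable in the environment used to serve):

```python
# Minimal environment check for the versions relevant to this error.
import torch
import vllm

print("torch:", torch.__version__)
print("torch built for CUDA:", torch.version.cuda)  # CUDA version torch was compiled against
print("CUDA available:", torch.cuda.is_available())
print("vllm:", vllm.__version__)
```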
@HwwwwwwwH I'm also facing the same issue with your fork. I followed these steps first:
Then I got this error:
Here is the full error trace. I also tried using the official vLLM since I noticed that...
Command:
Error:
Edit 1:
Command:
Same error:
Edit 2:
Is there any more of the error traceback?
Hello, here is the full traceback of the error:
You can update to the latest code from the HF repo.
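For anyone hitting this later, a minimal sketch of pulling the latest model files from Hugging Face (the repo id below is an assumption based on the model name in this thread; the local path matches the command in the issue body):

```python
# Refresh the local copy of the model repo so that the remote code
# (loaded via --trust-remote-code) matches the latest HF version.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="openbmb/MiniCPM-o-2_6",  # hypothetical repo id; adjust to yours
    local_dir="/DATA/disk0/ld/ld_model_pretrain/MiniCPM-o-2_6",
)
print("model files refreshed at:", local_dir)
```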
Hi, that solved it, thank you~
Running:
vllm serve /DATA/disk0/ld/ld_model_pretrain/MiniCPM-o-2_6 --dtype auto --max-model-len 2048 --api-key token-abc123 --gpu_memory_utilization 1 --trust-remote-code
The error is as follows:
ERROR 01-26 11:32:37 engine.py:380] File "/autodl-fs/data/github/vllm/vllm/vllm_flash_attn/flash_attn_interface.py", line 154, in flash_attn_varlen_func
ERROR 01-26 11:32:37 engine.py:380] out, softmax_lse = torch.ops._vllm_fa2_C.varlen_fwd(
AttributeError: '_OpNamespace' '_vllm_fa2_C' object has no attribute 'varlen_fwd'
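To confirm the missing op directly, here is a minimal diagnostic sketch (it assumes importing vllm loads the compiled _vllm_fa2_C extension; this is not a documented vLLM API):

```python
# Check whether the compiled FA2 kernel that vLLM calls is actually registered.
# If this prints False, the _vllm_fa2_C extension lacks varlen_fwd (commonly a
# CUDA / wheel mismatch), which matches the AttributeError above.
import torch
import vllm  # noqa: F401 -- assumed to load the compiled extension on import

print(hasattr(torch.ops._vllm_fa2_C, "varlen_fwd"))
```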
Even the official tutorial throws this error!