
[vllm] - Error loading the openbmb/MiniCPM-V-2_6-int4 model (layers.0.mlp.down_proj.weight.absmax) #789

Open
lsky-walt opened this issue Jan 23, 2025 · 0 comments
Labels
question Further information is requested

@lsky-walt

Start Date

2025-01-23

Implementation PR

No response

Reference Issues

No response

Summary

Command: docker run -d -it --name minicpm --runtime nvidia --gpus all -p 4000:8000 --ipc=host -v /home/xxx/models/MiniCPM-V-2_6-int4/:/mnt/model/ --env "HF_DATASET_OFFLINE=1" vllm/vllm-openai:v0.6.6.post1 --model /mnt/model/ --gpu_memory_utilization 1 --max-model-len 4096 --max-num-seqs 32 --trust-remote-code

Error summary: KeyError: 'layers.0.mlp.down_proj.weight.absmax'

Basic Example

docker run -d -it --name minicpm --runtime nvidia --gpus all -p 4000:8000 --ipc=host -v /home/xxx/models/MiniCPM-V-2_6-int4/:/mnt/model/ --env "HF_DATASET_OFFLINE=1" vllm/vllm-openai:v0.6.6.post1 --model /mnt/model/ --gpu_memory_utilization 1 --max-model-len 4096 --max-num-seqs 32 --trust-remote-code
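A note on the likely cause, offered as an assumption rather than a confirmed diagnosis: the .absmax suffix matches the per-block scale tensors that bitsandbytes stores alongside 4-bit quantized weights, and vLLM's default weight loader does not recognize these auxiliary tensors unless bitsandbytes loading is requested explicitly. A variant of the command that asks vLLM to treat the checkpoint as bitsandbytes-quantized (a sketch, untested here; whether v0.6.6.post1 supports this path for MiniCPM-V-2_6 is not confirmed):

docker run -d -it --name minicpm --runtime nvidia --gpus all -p 4000:8000 --ipc=host -v /home/xxx/models/MiniCPM-V-2_6-int4/:/mnt/model/ --env "HF_DATASET_OFFLINE=1" vllm/vllm-openai:v0.6.6.post1 --model /mnt/model/ --quantization bitsandbytes --load-format bitsandbytes --gpu_memory_utilization 1 --max-model-len 4096 --max-num-seqs 32 --trust-remote-code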


Runtime Environment

[Screenshot of the runtime environment attached in the original issue]

Drawbacks

INFO 01-22 23:09:41 selector.py:120] Using Flash Attention backend.
INFO 01-22 23:09:42 model_runner.py:1094] Starting to load model /mnt/model/...
INFO 01-22 23:09:42 selector.py:249] Cannot use FlashAttention-2 backend for head size 72.
INFO 01-22 23:09:42 selector.py:129] Using XFormers backend.
Loading safetensors checkpoint shards:   0% Completed | 0/2 [00:00<?, ?it/s]
ERROR 01-22 23:09:42 engine.py:366] 'layers.0.mlp.down_proj.weight.absmax'
ERROR 01-22 23:09:42 engine.py:366] Traceback (most recent call last):
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
ERROR 01-22 23:09:42 engine.py:366]     engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 01-22 23:09:42 engine.py:366]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 119, in from_engine_args
ERROR 01-22 23:09:42 engine.py:366]     return cls(ipc_path=ipc_path,
ERROR 01-22 23:09:42 engine.py:366]            ^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 71, in __init__
ERROR 01-22 23:09:42 engine.py:366]     self.engine = LLMEngine(*args, **kwargs)
ERROR 01-22 23:09:42 engine.py:366]                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 273, in __init__
ERROR 01-22 23:09:42 engine.py:366]     self.model_executor = executor_class(vllm_config=vllm_config, )
ERROR 01-22 23:09:42 engine.py:366]                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 36, in __init__
ERROR 01-22 23:09:42 engine.py:366]     self._init_executor()
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/executor/gpu_executor.py", line 35, in _init_executor
ERROR 01-22 23:09:42 engine.py:366]     self.driver_worker.load_model()
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 155, in load_model
ERROR 01-22 23:09:42 engine.py:366]     self.model_runner.load_model()
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1096, in load_model
ERROR 01-22 23:09:42 engine.py:366]     self.model = get_model(vllm_config=self.vllm_config)
ERROR 01-22 23:09:42 engine.py:366]                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/__init__.py", line 12, in get_model
ERROR 01-22 23:09:42 engine.py:366]     return loader.load_model(vllm_config=vllm_config)
ERROR 01-22 23:09:42 engine.py:366]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 366, in load_model
ERROR 01-22 23:09:42 engine.py:366]     loaded_weights = model.load_weights(
ERROR 01-22 23:09:42 engine.py:366]                      ^^^^^^^^^^^^^^^^^^^
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/minicpmv.py", line 592, in load_weights
ERROR 01-22 23:09:42 engine.py:366]     return loader.load_weights(weights)
ERROR 01-22 23:09:42 engine.py:366]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/utils.py", line 237, in load_weights
ERROR 01-22 23:09:42 engine.py:366]     autoloaded_weights = set(self._load_module("", self.module, weights))
ERROR 01-22 23:09:42 engine.py:366]                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/utils.py", line 198, in _load_module
ERROR 01-22 23:09:42 engine.py:366]     yield from self._load_module(prefix,
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/utils.py", line 175, in _load_module
ERROR 01-22 23:09:42 engine.py:366]     loaded_params = module_load_weights(weights)
ERROR 01-22 23:09:42 engine.py:366]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen2.py", line 506, in load_weights
ERROR 01-22 23:09:42 engine.py:366]     return loader.load_weights(weights)
ERROR 01-22 23:09:42 engine.py:366]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/utils.py", line 237, in load_weights
ERROR 01-22 23:09:42 engine.py:366]     autoloaded_weights = set(self._load_module("", self.module, weights))
ERROR 01-22 23:09:42 engine.py:366]                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/utils.py", line 198, in _load_module
ERROR 01-22 23:09:42 engine.py:366]     yield from self._load_module(prefix,
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/utils.py", line 175, in _load_module
ERROR 01-22 23:09:42 engine.py:366]     loaded_params = module_load_weights(weights)
ERROR 01-22 23:09:42 engine.py:366]                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-22 23:09:42 engine.py:366]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen2.py", line 393, in load_weights
ERROR 01-22 23:09:42 engine.py:366]     param = params_dict[name]
ERROR 01-22 23:09:42 engine.py:366]             ~~~~~~~~~~~^^^^^^
ERROR 01-22 23:09:42 engine.py:366] KeyError: 'layers.0.mlp.down_proj.weight.absmax'
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.12/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 368, in run_mp_engine
    raise e
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 357, in run_mp_engine
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 119, in from_engine_args
    return cls(ipc_path=ipc_path,
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/multiprocessing/engine.py", line 71, in __init__
    self.engine = LLMEngine(*args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/engine/llm_engine.py", line 273, in __init__
    self.model_executor = executor_class(vllm_config=vllm_config, )
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/executor_base.py", line 36, in __init__
    self._init_executor()
  File "/usr/local/lib/python3.12/dist-packages/vllm/executor/gpu_executor.py", line 35, in _init_executor
    self.driver_worker.load_model()
  File "/usr/local/lib/python3.12/dist-packages/vllm/worker/worker.py", line 155, in load_model
    self.model_runner.load_model()
  File "/usr/local/lib/python3.12/dist-packages/vllm/worker/model_runner.py", line 1096, in load_model
    self.model = get_model(vllm_config=self.vllm_config)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/__init__.py", line 12, in get_model
    return loader.load_model(vllm_config=vllm_config)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/model_loader/loader.py", line 366, in load_model
    loaded_weights = model.load_weights(
                     ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/minicpmv.py", line 592, in load_weights
    return loader.load_weights(weights)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/utils.py", line 237, in load_weights
    autoloaded_weights = set(self._load_module("", self.module, weights))
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/utils.py", line 198, in _load_module
    yield from self._load_module(prefix,
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/utils.py", line 175, in _load_module
    loaded_params = module_load_weights(weights)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen2.py", line 506, in load_weights
    return loader.load_weights(weights)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/utils.py", line 237, in load_weights
    autoloaded_weights = set(self._load_module("", self.module, weights))
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/utils.py", line 198, in _load_module
    yield from self._load_module(prefix,
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/utils.py", line 175, in _load_module
    loaded_params = module_load_weights(weights)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/qwen2.py", line 393, in load_weights
    param = params_dict[name]
            ~~~~~~~~~~~^^^^^^
KeyError: 'layers.0.mlp.down_proj.weight.absmax'
Loading safetensors checkpoint shards:   0% Completed | 0/2 [00:00<?, ?it/s]

[rank0]:[W122 23:09:43.089645582 ProcessGroupNCCL.cpp:1250] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present,  but this warning has only been added since PyTorch 2.4 (function operator())
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 774, in <module>
    uvloop.run(run_server(args))
  File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 109, in run
    return __asyncio.run(
           ^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 194, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
           ^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 740, in run_server
    async with build_async_engine_client(args) as engine_client:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 118, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 223, in build_async_engine_client_from_engine_args
    raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.
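For anyone triaging: the traceback shows the failure is in qwen2.py's load_weights, which looks up each checkpoint tensor name in the model's params_dict (param = params_dict[name]); that dict only contains plain parameter names such as layers.0.mlp.down_proj.weight, so the auxiliary *.absmax entries in the checkpoint raise KeyError. A minimal Python sketch to confirm the checkpoint carries bitsandbytes-style quantization state; the shard filename below is illustrative, substitute the actual files under /mnt/model/:

# Sketch: list auxiliary quantization tensors in a safetensors shard.
# The shard filename is illustrative -- use the real one from the model directory.
from safetensors import safe_open

with safe_open("/mnt/model/model-00001-of-00002.safetensors", framework="pt") as f:
    for name in f.keys():
        if "absmax" in name or "quant" in name:
            print(name)  # expect entries like ...mlp.down_proj.weight.absmax

If such names appear, the checkpoint is pre-quantized and needs a loader that understands the quantization state (e.g. vLLM's bitsandbytes load path, as sketched above); alternatively, serving the unquantized openbmb/MiniCPM-V-2_6 checkpoint avoids the issue entirely.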

Unresolved questions

No response

@lsky-walt added the question (Further information is requested) label on Jan 23, 2025