Enabling a LoRA adapter with vLLM is not working #16604
SSPRATAP17 announced in Q&A
Replies: 3 comments
-
Can anyone please update on this?
-
I can confirm this is happening on v0.8.3 (on ROCm 6.3), with the same traceback as in the original post.
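To separate vLLM from the toolchain, a quick probe (my own throwaway kernel, not vLLM code) is to check whether the installed Triton build accepts the `maxnreg` launch kwarg at all, since that is the keyword the traceback rejects:

```python
# Throwaway probe: does this Triton build recognise the maxnreg launch kwarg?
# On builds that don't, this raises the same
#   KeyError: 'Keyword argument maxnreg was specified but unrecognised'
# seen in the vLLM traceback (triton/runtime/jit.py, run()).
import torch
import triton
import triton.language as tl

@triton.jit
def _probe_kernel(x_ptr):
    tl.store(x_ptr, 0.0)

x = torch.zeros(1, device="cuda")  # "cuda" maps to the ROCm device under PyTorch-ROCm
_probe_kernel[(1,)](x, maxnreg=64)  # fails here on affected Triton builds
```

If this raises the same KeyError, the Triton build simply predates support for that kwarg, and a build that recognises `maxnreg` (or a vLLM version that avoids passing it on ROCm) should be the fix.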
-
Hi, observed this on MI300X too.
-
I'm trying to enable a LoRA adapter dynamically with vLLM. Below are the steps I followed:
`VLLM_ALLOW_RUNTIME_LORA_UPDATING=True python3 -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-3.2-3B-Instruct --served-model-name Llama-3.2-3B-Instruct --enable-lora --max-lora-rank 64`
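For context, the runtime-loading step I plan to run once the server is up uses the `/v1/load_lora_adapter` endpoint that `VLLM_ALLOW_RUNTIME_LORA_UPDATING=True` enables; the adapter name and path below are placeholders for my local adapter:

```python
# Register a LoRA adapter on an already-running server via the
# dynamic-loading endpoint (placeholder name/path, not real artifacts).
import json
import urllib.request

req = urllib.request.Request(
    "http://localhost:8000/v1/load_lora_adapter",
    data=json.dumps({
        "lora_name": "my_adapter",           # name used in later inference requests
        "lora_path": "/path/to/my_adapter",  # local directory containing the LoRA weights
    }).encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```

In practice the server never comes up, so this step is never reached.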
Below are the logs:
```
INFO 04-14 16:15:31 [__init__.py:239] Automatically detected platform rocm.
WARNING 04-14 16:15:32 [api_server.py:759] LoRA dynamic loading & unloading is enabled in the API server. This should ONLY be used for local development!
INFO 04-14 16:15:32 [api_server.py:981] vLLM API server version 0.8.3.dev19+g3eb08ed9
INFO 04-14 16:15:32 [api_server.py:982] args: Namespace(host=None, port=8000, uvicorn_log_level='info', disable_uvicorn_access_log=False, allow_credentials=False, allowed_origins=[''], allowed_methods=[''], allowed_headers=[''], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='meta-llama/Llama-3.2-3B-Instruct', task='auto', tokenizer=None, hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=None, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=1, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=None, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=True, enable_lora_bias=False, max_loras=1, max_lora_rank=64, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, use_tqdm_on_load=True, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_config=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=['Llama-3.2-3B-Instruct'], qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', worker_extension_cls='', 
generation_config='auto', override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, enable_reasoning=False, reasoning_parser=None, disable_cascade_attn=False, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, enable_server_load_tracking=False)
INFO 04-14 16:15:44 [config.py:585] This model supports multiple tasks: {'embed', 'generate', 'classify', 'score', 'reward'}. Defaulting to 'generate'.
INFO 04-14 16:15:44 [arg_utils.py:1868] LORA is experimental on VLLM_USE_V1=1. Falling back to V0 Engine.
WARNING 04-14 16:15:44 [arg_utils.py:1744] The model has a long context length (131072). This may cause OOM during the initial memory profiling phase, or result in low performance due to small KV cache size. Consider setting --max-model-len to a smaller value.
INFO 04-14 16:15:44 [config.py:1552] Disabled the custom all-reduce kernel because it is not supported on AMD GPUs.
INFO 04-14 16:15:44 [api_server.py:241] Started engine process with PID 161
INFO 04-14 16:15:47 [__init__.py:239] Automatically detected platform rocm.
WARNING 04-14 16:15:48 [api_server.py:759] LoRA dynamic loading & unloading is enabled in the API server. This should ONLY be used for local development!
INFO 04-14 16:15:48 [llm_engine.py:241] Initializing a V0 LLM engine (v0.8.3.dev19+g3eb08ed9) with config: model='meta-llama/Llama-3.2-3B-Instruct', speculative_config=None, tokenizer='meta-llama/Llama-3.2-3B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=True, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=Llama-3.2-3B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=None, chunked_prefill_enabled=False, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=True,
INFO 04-14 16:15:54 [rocm.py:131] None is not supported in AMD GPUs.
INFO 04-14 16:15:54 [rocm.py:132] Using ROCmFlashAttention backend.
INFO 04-14 16:15:54 [parallel_state.py:954] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0
INFO 04-14 16:15:54 [model_runner.py:1110] Starting to load model meta-llama/Llama-3.2-3B-Instruct...
INFO 04-14 16:15:55 [weight_utils.py:265] Using model weights format ['.safetensors']
Loading safetensors checkpoint shards: 0% Completed | 0/2 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 50% Completed | 1/2 [00:00<00:00, 1.51it/s]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:02<00:00, 1.55s/it]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:02<00:00, 1.42s/it]
INFO 04-14 16:15:58 [loader.py:447] Loading weights took 3.02 seconds
INFO 04-14 16:15:58 [punica_selector.py:18] Using PunicaWrapperGPU.
INFO 04-14 16:15:58 [model_runner.py:1146] Model loading took 7.2754 GB and 3.961958 seconds
ERROR 04-14 16:15:59 [engine.py:448] 'Keyword argument maxnreg was specified but unrecognised'
ERROR 04-14 16:15:59 [engine.py:448] Traceback (most recent call last):
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 436, in run_mp_engine
ERROR 04-14 16:15:59 [engine.py:448] engine = MQLLMEngine.from_vllm_config(
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 128, in from_vllm_config
ERROR 04-14 16:15:59 [engine.py:448] return cls(
ERROR 04-14 16:15:59 [engine.py:448] ^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 82, in __init__
ERROR 04-14 16:15:59 [engine.py:448] self.engine = LLMEngine(*args, **kwargs)
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/engine/llm_engine.py", line 283, in __init__
ERROR 04-14 16:15:59 [engine.py:448] self._initialize_kv_caches()
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/engine/llm_engine.py", line 432, in _initialize_kv_caches
ERROR 04-14 16:15:59 [engine.py:448] self.model_executor.determine_num_available_blocks())
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/executor/executor_base.py", line 102, in determine_num_available_blocks
ERROR 04-14 16:15:59 [engine.py:448] results = self.collective_rpc("determine_num_available_blocks")
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
ERROR 04-14 16:15:59 [engine.py:448] answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/utils.py", line 2255, in run_method
ERROR 04-14 16:15:59 [engine.py:448] return func(*args, **kwargs)
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 04-14 16:15:59 [engine.py:448] return func(*args, **kwargs)
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/worker/worker.py", line 229, in determine_num_available_blocks
ERROR 04-14 16:15:59 [engine.py:448] self.model_runner.profile_run()
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 04-14 16:15:59 [engine.py:448] return func(*args, **kwargs)
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/worker/model_runner.py", line 1243, in profile_run
ERROR 04-14 16:15:59 [engine.py:448] self._dummy_run(max_num_batched_tokens, max_num_seqs)
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/worker/model_runner.py", line 1354, in _dummy_run
ERROR 04-14 16:15:59 [engine.py:448] self.execute_model(model_input, kv_caches, intermediate_tensors)
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 04-14 16:15:59 [engine.py:448] return func(*args, **kwargs)
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/worker/model_runner.py", line 1742, in execute_model
ERROR 04-14 16:15:59 [engine.py:448] hidden_or_intermediate_states = model_executable(
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
ERROR 04-14 16:15:59 [engine.py:448] return self._call_impl(*args, **kwargs)
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
ERROR 04-14 16:15:59 [engine.py:448] return forward_call(*args, **kwargs)
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/model_executor/models/llama.py", line 529, in forward
ERROR 04-14 16:15:59 [engine.py:448] model_output = self.model(input_ids, positions, intermediate_tensors,
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 172, in __call__
ERROR 04-14 16:15:59 [engine.py:448] return self.forward(*args, **kwargs)
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/model_executor/models/llama.py", line 350, in forward
ERROR 04-14 16:15:59 [engine.py:448] hidden_states = self.get_input_embeddings(input_ids)
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/model_executor/models/llama.py", line 337, in get_input_embeddings
ERROR 04-14 16:15:59 [engine.py:448] return self.embed_tokens(input_ids)
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
ERROR 04-14 16:15:59 [engine.py:448] return self._call_impl(*args, **kwargs)
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
ERROR 04-14 16:15:59 [engine.py:448] return forward_call(*args, **kwargs)
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/lora/layers.py", line 264, in forward
ERROR 04-14 16:15:59 [engine.py:448] self.punica_wrapper.add_lora_embedding(full_output,
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/lora/punica_wrapper/punica_gpu.py", line 176, in add_lora_embedding
ERROR 04-14 16:15:59 [engine.py:448] lora_expand(
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_ops.py", line 1158, in __call__
ERROR 04-14 16:15:59 [engine.py:448] return self._op(*args, **(kwargs or {}))
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 04-14 16:15:59 [engine.py:448] return func(*args, **kwargs)
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/lora/ops/triton_ops/lora_expand.py", line 219, in _lora_expand
ERROR 04-14 16:15:59 [engine.py:448] _lora_expand_kernel[grid](
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/triton/runtime/jit.py", line 368, in <lambda>
ERROR 04-14 16:15:59 [engine.py:448] return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
ERROR 04-14 16:15:59 [engine.py:448] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 04-14 16:15:59 [engine.py:448] File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/triton/runtime/jit.py", line 596, in run
ERROR 04-14 16:15:59 [engine.py:448] raise KeyError("Keyword argument %s was specified but unrecognised" % k)
ERROR 04-14 16:15:59 [engine.py:448] KeyError: 'Keyword argument maxnreg was specified but unrecognised'
Process SpawnProcess-1:
Traceback (most recent call last):
File "/opt/conda/envs/py_3.12/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/opt/conda/envs/py_3.12/lib/python3.12/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 450, in run_mp_engine
raise e
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 436, in run_mp_engine
engine = MQLLMEngine.from_vllm_config(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 128, in from_vllm_config
return cls(
^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 82, in init
self.engine = LLMEngine(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/engine/llm_engine.py", line 283, in init
self._initialize_kv_caches()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/engine/llm_engine.py", line 432, in _initialize_kv_caches
self.model_executor.determine_num_available_blocks())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/executor/executor_base.py", line 102, in determine_num_available_blocks
results = self.collective_rpc("determine_num_available_blocks")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
answer = run_method(self.driver_worker, method, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/utils.py", line 2255, in run_method
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/worker/worker.py", line 229, in determine_num_available_blocks
self.model_runner.profile_run()
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/worker/model_runner.py", line 1243, in profile_run
self._dummy_run(max_num_batched_tokens, max_num_seqs)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/worker/model_runner.py", line 1354, in _dummy_run
self.execute_model(model_input, kv_caches, intermediate_tensors)
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/worker/model_runner.py", line 1742, in execute_model
hidden_or_intermediate_states = model_executable(
^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/model_executor/models/llama.py", line 529, in forward
model_output = self.model(input_ids, positions, intermediate_tensors,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/compilation/decorators.py", line 172, in call
return self.forward(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/model_executor/models/llama.py", line 350, in forward
hidden_states = self.get_input_embeddings(input_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/model_executor/models/llama.py", line 337, in get_input_embeddings
return self.embed_tokens(input_ids)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/lora/layers.py", line 264, in forward
self.punica_wrapper.add_lora_embedding(full_output,
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/lora/punica_wrapper/punica_gpu.py", line 176, in add_lora_embedding
lora_expand(
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/_ops.py", line 1158, in call
return self._op(*args, **(kwargs or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/lora/ops/triton_ops/lora_expand.py", line 219, in _lora_expand
_lora_expand_kernel[grid](
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/triton/runtime/jit.py", line 368, in
return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/triton/runtime/jit.py", line 596, in run
raise KeyError("Keyword argument %s was specified but unrecognised" % k)
KeyError: 'Keyword argument maxnreg was specified but unrecognised'
[rank0]:[W414 16:15:59.861372715 ProcessGroupNCCL.cpp:1477] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in _run_code
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1066, in
uvloop.run(run_server(args))
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/uvloop/init.py", line 109, in run
return __asyncio.run(
^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/uvloop/init.py", line 61, in wrapper
return await main
^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 1016, in run_server
async with build_async_engine_client(args) as engine_client:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 210, in aenter
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 141, in build_async_engine_client
async with build_async_engine_client_from_engine_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/contextlib.py", line 210, in aenter
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/envs/py_3.12/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 264, in build_async_engine_client_from_engine_args
raise RuntimeError(
```
vLLM version: 0.8.3.dev19+g3eb08ed9.rocm634
Please let me know if I'm missing anything here.