是否已有关于该错误的issue或讨论? | Is there an existing issue / discussion for this?
我已经搜索过已有的issues和讨论 | I have searched the existing issues / discussions
该问题是否在FAQ中有解答? | Is there an existing answer for this in FAQ?
我已经搜索过FAQ | I have searched FAQ
当前行为 | Current Behavior
After using the LoRA tuning workflow in docs/llamafactory_train_and_infer.md to train the model with the mllm_demo.json dataset, I run the script below: python web_demos/minicpm-o_2.6/model_server.py
and hit the following error:
Traceback (most recent call last):
File "/Data3/liwenjie/bishe/MiniCPM-o/web_demos/minicpm-o_2.6/model_server.py", line 601, in <module>
stream_manager = StreamManager()
File "/Data3/liwenjie/bishe/MiniCPM-o/web_demos/minicpm-o_2.6/model_server.py", line 127, in __init__
self.sys_prompt_init(0)
File "/Data3/liwenjie/bishe/MiniCPM-o/web_demos/minicpm-o_2.6/model_server.py", line 264, in sys_prompt_init
self.minicpmo_model.streaming_prefill(
File "/Data1/home/fanziqi/.conda/envs/MiniCPMo/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Data1/home/fanziqi/.cache/huggingface/modules/transformers_modules/minicpmo_2_6_lora_sft_default/modeling_minicpmo.py", line 1133, in streaming_prefill
inputs_embeds = self.get_omni_embedding(
File "/Data1/home/fanziqi/.cache/huggingface/modules/transformers_modules/minicpmo_2_6_lora_sft_default/modeling_minicpmo.py", line 574, in get_omni_embedding
audio_embeddings = self.get_audio_embedding_streaming(data)
File "/Data1/home/fanziqi/.cache/huggingface/modules/transformers_modules/minicpmo_2_6_lora_sft_default/modeling_minicpmo.py", line 443, in get_audio_embedding_streaming
audio_outputs = self.apm(wavforms, past_key_values=self.audio_past_key_values, use_cache=True)
File "/Data1/home/fanziqi/.conda/envs/MiniCPMo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1709, in __getattr__
raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'MiniCPMO' object has no attribute 'apm'. Did you mean: 'vpm'?
期望行为 | Expected Behavior
No response
复现方法 | Steps To Reproduce
The LoRA training YAML and the export YAML were NOT changed.
Web chat works normally, but the official model_server.py script hits the error above.
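For reference, a minimal check that reproduces the missing attribute outside model_server.py (a hedged sketch; the export path is a placeholder and the loading kwargs may differ from what the server uses):

```python
# Hedged sketch: load the exported checkpoint directly and check for the
# audio encoder. The path below is a placeholder for the LoRA-merged export.
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "/path/to/minicpmo_2_6_lora_sft_default",  # placeholder export dir
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
# streaming_prefill eventually calls self.apm; if this prints False,
# the AttributeError above is expected.
print("has apm:", hasattr(model, "apm"))
print("has tts:", hasattr(model, "tts"))
```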
运行环境 | Environment
- OS: Ubuntu 20.04
- Python: 3.10
- Transformers: 4.48.1 (version 4.44.2 fails with "data did not match any variant of untagged enum ModelWrapper at line 757767 column 3")
- PyTorch: 2.3.1
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 12.1
备注 | Anything else?
No response
lwj2001 changed the title to [BUG] 'MiniCPMO' object has no attribute 'apm'. on Jan 22, 2025
You can verify whether the training includes the apm (audio encoder). If the apm was not loaded during training, the saved model might not encompass this part.
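One way to do that check (a sketch; the export directory is a placeholder) is to scan the saved safetensors shards for apm.* tensors:

```python
# Hedged sketch: look for audio-encoder (apm.*) tensors in the exported shards.
import glob
import os
from safetensors import safe_open

export_dir = "/path/to/minicpmo_2_6_lora_sft_default"  # placeholder export dir

apm_keys = []
for shard in sorted(glob.glob(os.path.join(export_dir, "*.safetensors"))):
    with safe_open(shard, framework="pt", device="cpu") as f:
        apm_keys.extend(k for k in f.keys() if k.startswith("apm."))

# Zero keys means the export dropped the audio encoder, which matches the AttributeError.
print(f"apm tensors found: {len(apm_keys)}")
```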
Hi, I see this message during training: [INFO|2025-01-23 09:59:42] llamafactory.model.model_utils.visual:157 >> Set vision model not trainable: ['vpm', 'apm', 'resampler', 'tts'].
How do I save the apm module into the trained model?
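For what it's worth, one way to narrow this down is to diff the exported config.json against the base model's config (a sketch; the export path is a placeholder and the base repo id openbmb/MiniCPM-o-2_6 is an assumption) to see whether any audio/TTS-related settings changed during export:

```python
# Hedged sketch: compare the exported config against the base model's config
# to spot audio/TTS-related options that were dropped or disabled on export.
import json
from huggingface_hub import hf_hub_download

base_cfg = json.load(open(hf_hub_download("openbmb/MiniCPM-o-2_6", "config.json")))
export_cfg = json.load(open("/path/to/minicpmo_2_6_lora_sft_default/config.json"))  # placeholder

for key in sorted(set(base_cfg) | set(export_cfg)):
    if base_cfg.get(key) != export_cfg.get(key):
        print(f"{key}: {base_cfg.get(key)!r} -> {export_cfg.get(key)!r}")
```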