
[BUG] 'MiniCPMO' object has no attribute 'apm'. #781

Closed
2 tasks done
lwj2001 opened this issue Jan 22, 2025 · 3 comments

Comments

lwj2001 commented Jan 22, 2025

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in the FAQ?

  • I have searched the FAQ

Current Behavior

After using LoRA tuning as described in docs/llamafactory_train_and_infer.md to train the model on the mllm_demo.json dataset, I ran the script below:
python web_demos/minicpm-o_2.6/model_server.py
and encountered the following error:

Traceback (most recent call last):
  File "/Data3/liwenjie/bishe/MiniCPM-o/web_demos/minicpm-o_2.6/model_server.py", line 601, in <module>
    stream_manager = StreamManager()
  File "/Data3/liwenjie/bishe/MiniCPM-o/web_demos/minicpm-o_2.6/model_server.py", line 127, in __init__
    self.sys_prompt_init(0)
  File "/Data3/liwenjie/bishe/MiniCPM-o/web_demos/minicpm-o_2.6/model_server.py", line 264, in sys_prompt_init
    self.minicpmo_model.streaming_prefill(
  File "/Data1/home/fanziqi/.conda/envs/MiniCPMo/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Data1/home/fanziqi/.cache/huggingface/modules/transformers_modules/minicpmo_2_6_lora_sft_default/modeling_minicpmo.py", line 1133, in streaming_prefill
    inputs_embeds = self.get_omni_embedding(
  File "/Data1/home/fanziqi/.cache/huggingface/modules/transformers_modules/minicpmo_2_6_lora_sft_default/modeling_minicpmo.py", line 574, in get_omni_embedding
    audio_embeddings = self.get_audio_embedding_streaming(data)
  File "/Data1/home/fanziqi/.cache/huggingface/modules/transformers_modules/minicpmo_2_6_lora_sft_default/modeling_minicpmo.py", line 443, in get_audio_embedding_streaming
    audio_outputs = self.apm(wavforms, past_key_values=self.audio_past_key_values, use_cache=True)
  File "/Data1/home/fanziqi/.conda/envs/MiniCPMo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1709, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'MiniCPMO' object has no attribute 'apm'. Did you mean: 'vpm'?

Expected Behavior

No response

Steps To Reproduce

The LoRA training YAML and export YAML were NOT changed.
The web chat works normally, but the official model_server.py code hits the bug above.

Environment

- OS: Ubuntu 20.04
- Python: 3.10
- Transformers: 4.48.1 (version 4.44.2 fails with "data did not match any variant of untagged enum ModelWrapper at line 757767 column 3")
- PyTorch: 2.3.1
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): 12.1

Anything else?

No response

YuzaChongyi (Collaborator) commented:

You can verify whether the training included the apm (audio encoder). If the apm was not loaded during training, the saved model might not include this part.
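One way to perform this check is to inspect the exported checkpoint's weight index and look for keys under the `apm` module. This is a minimal sketch, assuming the export uses sharded safetensors with a standard model.safetensors.index.json; the checkpoint directory name is just an example and should point at your own export.

```python
# Sketch: check whether a saved checkpoint actually contains the audio
# encoder ("apm") weights, by reading the safetensors weight index that
# Hugging Face-style exports write next to the shards.
import json
import os

def modules_in_checkpoint(index_path):
    """Return the set of top-level module names listed in a
    model.safetensors.index.json weight map (e.g. {'llm', 'vpm', 'apm'})."""
    with open(index_path) as f:
        index = json.load(f)
    return {key.split(".")[0] for key in index["weight_map"]}

def has_apm(index_path):
    """True if any weight key belongs to the apm (audio encoder) module."""
    return "apm" in modules_in_checkpoint(index_path)

if __name__ == "__main__":
    # Example path; replace with your exported model directory.
    path = os.path.join("minicpmo_2_6_lora_sft_default", "model.safetensors.index.json")
    if os.path.exists(path):
        print("apm present:", has_apm(path))
        print("modules found:", sorted(modules_in_checkpoint(path)))
```

If `apm` is missing from the module list while `vpm` is present, the export dropped the audio encoder, which matches the AttributeError in the traceback.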

lwj2001 commented Jan 23, 2025

Hi, I see this message during training:
[INFO|2025-01-23 09:59:42] llamafactory.model.model_utils.visual:157 >> Set vision model not trainable: ['vpm', 'apm', 'resampler', 'tts'].
How do I save the apm module into the trained model?

BUAADreamer (Contributor) commented Jan 23, 2025

For now, you can change the code in LLaMA-Factory to work around this:

  • Modify: set L115 and L116 in src/llamafactory/model/patcher.py to True
  • Note: with this change, if you want to do full-parameter SFT, use DeepSpeed ZeRO-2 (ds2)
    if getattr(config, "model_type", None) == "minicpmo":
        setattr(config, "init_audio", True)
        setattr(config, "init_tts", True)
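The effect of the two setattr lines above can be illustrated in isolation. This is a hedged sketch: MockConfig is a hypothetical stand-in for the real transformers config object, used only to show what the patch changes.

```python
# Sketch of the workaround: force the MiniCPM-o config to initialize the
# audio (apm) and TTS modules, mirroring the two patched lines in
# src/llamafactory/model/patcher.py. MockConfig is a hypothetical
# stand-in for the real transformers PretrainedConfig.
class MockConfig:
    model_type = "minicpmo"
    init_audio = False  # a False default is what drops apm from the export
    init_tts = False

def patch_minicpmo_config(config):
    """Enable audio and TTS module initialization for minicpmo configs;
    leave configs of other model types untouched."""
    if getattr(config, "model_type", None) == "minicpmo":
        setattr(config, "init_audio", True)
        setattr(config, "init_tts", True)
    return config

cfg = patch_minicpmo_config(MockConfig())
print(cfg.init_audio, cfg.init_tts)  # → True True
```

With both flags True, the apm and tts submodules are constructed at load time, so their weights are part of the model that gets saved and the exported checkpoint no longer raises the missing-`apm` AttributeError.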

Cuiunbo closed this as completed Feb 6, 2025