
Is LoRA fine-tuning of the visual model supported? #856

Open
HiXxxSss opened this issue Feb 26, 2025 · 0 comments

Comments

@HiXxxSss

Hello, when I try to fine-tune the visual model with LoRA, I get an error. Specifically, I defined the following method on Qwen2_5_VLForConditionalGeneration:

from peft import LoraConfig, get_peft_model

def wrap_visual_lora(self, r=128, lora_alpha=256, lora_dropout=0.05):
    lora_config = LoraConfig(
        r=r,
        target_modules=['attn.qkv', 'attn.proj', 'mlp.gate_proj', 'mlp.up_proj', 'mlp.down_proj'],
        lora_alpha=lora_alpha,
        lora_dropout=lora_dropout,
    )
    # Wrap only the vision tower in a PeftModel
    self.visual = get_peft_model(self.visual, lora_config)
    self.visual.enable_input_require_grads()
    self.visual.print_trainable_parameters()

But it fails with the traceback below. The cause seems to be that the visual model does not implement a get_input_embeddings method. Is there another way to fine-tune the visual part with LoRA?

Traceback (most recent call last):
  File "/DATA/workshop/personal/Qwen2_5_VL/qwen2_5_vl/train/train.py", line 379, in <module>
    main()
  File "/DATA/workshop/personal/Qwen2_5_VL/qwen2_5_vl/train/train.py", line 324, in main
    model.wrap_visual_lora(r=model_args.use_visual_lora, lora_alpha=2 * model_args.use_visual_lora)
  File "/DATA/workshop/personal/Qwen2_5_VL/qwen2_5_vl/model/modeling_qwen2_5_vlForClassification.py", line 241, in wrap_visual_lora
    self.visual = get_peft_model(self.visual, lora_config)
  File "/usr/local/lib/python3.10/dist-packages/peft/mapping.py", line 95, in get_peft_model
    return PeftModel(model, peft_config, adapter_name=adapter_name)
  File "/usr/local/lib/python3.10/dist-packages/peft/peft_model.py", line 120, in __init__
    model = self._prepare_model_for_gradient_checkpointing(model)
  File "/usr/local/lib/python3.10/dist-packages/peft/peft_model.py", line 320, in _prepare_model_for_gradient_checkpointing
    model.enable_input_require_grads()
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 1883, in enable_input_require_grads
    self._require_grads_hook = self.get_input_embeddings().register_forward_hook(make_inputs_require_grads)
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 1902, in get_input_embeddings
    raise NotImplementedError
NotImplementedError