finetune with lora error #17
Have you made any modifications to the original released code? If so, please provide more information about what you modified. If not, please share your startup command here. Thanks.
I met the same problem. The command is unmodified from the "finetune.lora" case in run.sh.
Could you provide more information about this error, for example the surrounding logs?
Traceback (most recent call last):
Have you made sure your deepspeed and transformers versions are consistent with ours?
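For reference, a quick way to compare the installed versions against the ones pinned in the repo's requirements (illustrative; assumes peft is installed alongside):

```python
# Print installed versions to compare against the repo's requirements file.
import transformers, deepspeed, peft

print("transformers:", transformers.__version__)
print("deepspeed:", deepspeed.__version__)
print("peft:", peft.__version__)
```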
I'm having the same problem. My scenario: using LoRA to fine-tune MobileVLM V2 1.7B.
VSCode debug setting:
Dependency versions:
Issue logs:
Please refer to commit 688fdec; you can start it like this: `bash run_v1.sh mobilevlm3b finetune.lora ${LANGUAGE_MODEL} ${VISION_MODEL} ${OUTPUT_DIR}`
Where can I download mm_projector?
I meet the following error when fine-tuning with LoRA:
```
ValueError: Target module LlamaDecoderLayer(
  (self_attn): LlamaAttention(
    (q_proj): Linear(in_features=2048, out_features=2048, bias=False)
    (k_proj): Linear(in_features=2048, out_features=2048, bias=False)
    (v_proj): Linear(in_features=2048, out_features=2048, bias=False)
    (o_proj): Linear(in_features=2048, out_features=2048, bias=False)
    (rotary_emb): LlamaRotaryEmbedding()
  )
  (mlp): LlamaMLP(
    (gate_proj): Linear(in_features=2048, out_features=5632, bias=False)
    (up_proj): Linear(in_features=2048, out_features=5632, bias=False)
    (down_proj): Linear(in_features=5632, out_features=2048, bias=False)
    (act_fn): SiLUActivation()
  )
  (input_layernorm): LlamaRMSNorm()
  (post_attention_layernorm): LlamaRMSNorm()
) is not supported. Currently, only torch.nn.Linear and Conv1D are supported.
```
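This error usually means the LoRA `target_modules` pattern matched a whole `LlamaDecoderLayer` container instead of its individual Linear sublayers; PEFT can only wrap `torch.nn.Linear` (and `Conv1D`) modules. A minimal sketch of one common workaround, assuming a LLaVA-style `find_all_linear_names` helper and a loaded `model` (the helper, exclusion list, and rank values here are illustrative, not the repo's exact code):

```python
# Sketch: build target_modules from Linear leaf names only, so PEFT never
# tries to wrap a container module such as LlamaDecoderLayer.
import torch
from peft import LoraConfig, get_peft_model

def find_all_linear_names(model):
    # Collect leaf names (e.g. "q_proj", "down_proj") of every torch.nn.Linear,
    # skipping the multimodal projector and the output head (illustrative).
    names = set()
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Linear):
            if "mm_projector" in name or "lm_head" in name:
                continue
            names.add(name.split(".")[-1])
    return sorted(names)

# r / lora_alpha below are illustrative defaults, not the repo's settings.
lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    target_modules=find_all_linear_names(model),  # Linear leaves only
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```

With only leaf Linear names (`q_proj`, `k_proj`, ...) in `target_modules`, no container module can match, which avoids this ValueError; a peft/transformers version mismatch, as asked about above, is the other thing worth ruling out.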