Hi, great work! Thanks for sharing!
When I trained with the released code, using the weights and data provided in the README, I encountered the following error:
I did not make any changes to the code. Could you please tell me how to fix this issue?
Thanks a lot!
    peft_config = LoraConfig(
        task_type=TaskType.CAUSAL_LM,               # causal language modeling
        inference_mode=False,                       # training, not inference
        r=self.args['lora_r'],                      # LoRA rank
        lora_alpha=self.args['lora_alpha'],         # LoRA scaling factor
        lora_dropout=self.args['lora_dropout'],     # dropout on the adapter layers
        target_modules=['q_proj', 'k_proj', 'v_proj', 'o_proj']  # attention projections to adapt
    )
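For context, a config like this is typically passed to peft's get_peft_model to wrap the base model before training; a minimal sketch, assuming peft_config is built as above with concrete values in place of self.args (the checkpoint path below is a placeholder, not the repo's actual weights):

    from transformers import LlamaForCausalLM
    from peft import get_peft_model

    base_model = LlamaForCausalLM.from_pretrained('path/to/llama-hf-weights')  # placeholder path
    model = get_peft_model(base_model, peft_config)
    model.print_trainable_parameters()  # confirms only the LoRA adapter weights are trainable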
Add the following 2 lines at the very beginning of the current file:

    from transformers import LlamaTokenizer
    from peft import *
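If you would rather avoid the wildcard import, an explicit form covering the names used in the snippet above would be (assuming nothing else from peft is referenced in that file):

    from transformers import LlamaTokenizer
    from peft import LoraConfig, TaskType, get_peft_model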
Loading the models is quite time-consuming (almost 6 minutes). Are there any ways to speed it up?
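One common way to cut load time, if it applies here, is to avoid the default fp32 materialization; a sketch assuming a standard transformers checkpoint (the path is a placeholder):

    import torch
    from transformers import LlamaForCausalLM

    # low_cpu_mem_usage=True loads weights directly into their final tensors
    # instead of randomly initializing the model and then copying weights in;
    # torch.float16 halves the bytes that must be read and allocated.
    model = LlamaForCausalLM.from_pretrained(
        'path/to/llama-hf-weights',   # placeholder path
        torch_dtype=torch.float16,
        low_cpu_mem_usage=True,
    )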