Issues: johnsmith0031/alpaca_lora_4bit
Why is LoRA support limited to simple LoRA with only q_proj and v_proj?
#155 opened Mar 4, 2024 by XpracticeYSKM
Is alpaca_lora_4bit@winglian-setup_pip missing finetune.py?
#143 opened Jul 23, 2023 by tensiondriven
Gibberish results for non-disabled "faster_mode" using "vicuna-7B-GPTQ-4bit-128g" model
#127 opened Jun 26, 2023 by alex4321
Does this repo support 2-bit finetuning of the LLaMA model? Is there an example showing how to run the scripts?
#122 opened Jun 19, 2023 by zlh1992
ValueError: Autograd4bitQuantLinear() does not have a parameter or a buffer named qzeros.
#105 opened May 17, 2023 by ra-MANUJ-an