Allow compressed-tensors quantized model to be trained #34520
base: main
Conversation
Thanks for the PR! Left a few suggestions. Could you explain a bit more how you are performing training with compressed-tensors models if you are not using PEFT? Are you maybe doing QAT, or just adding custom LoRA layers yourself?
@SunMarc We are not using LoRA adapters.
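For context, a minimal sketch of what full-parameter training (as opposed to attaching LoRA adapters via PEFT) means once the model is loaded; the checkpoint id below is hypothetical, not one referenced in this PR:

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical checkpoint id; any model saved with a compressed-tensors
# quantization config would follow the same pattern.
model_id = "nm-testing/llama-w8a8-compressed-tensors"

model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# No PEFT / LoRA: no adapter modules are injected. The model's own
# parameters stay trainable, so gradients flow through them directly.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```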
What does this PR do?
Using HFQuantizer, models that were quantized with compressed-tensors can already be loaded. This PR fixes the remaining issue so that those models can also be trained.
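To illustrate the intended workflow, here is a minimal sketch. The checkpoint id is hypothetical, and the dataset and hyperparameters are placeholders rather than anything used in this PR:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "nm-testing/tinyllama-w8a8-compressed-tensors"  # hypothetical id

# The compressed-tensors quantization config stored in the checkpoint is
# detected automatically at load time; no extra arguments are required.
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# With this fix, Trainer no longer refuses the quantized model, so a
# standard fine-tuning run can proceed.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ct-finetune", per_device_train_batch_size=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```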
Who can review?
@SunMarc @younesbelkada