This repository reimplements key experiments from the paper "LoRA: Low-Rank Adaptation of Large Language Models" by Hu et al. (2021). In particular, we focus on the experiments that demonstrate the effectiveness of LoRA in adapting large language models to new tasks with limited data. We reimplement the experiments on RoBERTa and GPT-2 using the Hugging Face Transformers library. Furthermore, we add extended experiments covering quantization, image classification, and the relationship between adapter rank and task. Each experiment is implemented in a separate Jupyter notebook.
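For context, the sketch below shows one common way to attach LoRA adapters to a RoBERTa model using the Hugging Face `peft` library. It is illustrative only, not the exact code from these notebooks; the hyperparameters (`r`, `lora_alpha`, dropout) and the choice of target modules are assumptions for the example.

```python
# Minimal sketch (not this repository's exact code): wrap RoBERTa with a
# LoRA adapter via Hugging Face's `peft` library. The rank, scaling, and
# target modules below are illustrative choices, not values from the repo.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2
)

lora_config = LoraConfig(
    r=8,                                # rank of the low-rank update matrices
    lora_alpha=16,                      # scaling factor applied to the LoRA update
    target_modules=["query", "value"],  # attention projections to adapt
    lora_dropout=0.1,
    task_type="SEQ_CLS",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```

With the adapter attached, the model can be fine-tuned with the usual Transformers training loop or `Trainer`; only the small LoRA matrices receive gradient updates, which is what makes the method practical with limited data.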