Colab for Alpaca LoRA

If you're looking to fine-tune a ChatGPT-level model but lack access to a GPU, Google Colab is a useful option to consider.

With a Google Colab Pro account, you can access a single 40GB A100 GPU ($10 for approximately 7.5 hours) or Tesla T4 GPU ($10 for approximately 50 hours), and sometimes these resources are available for free.

Here is a Google Colab Notebook example for fine-tuning Alpaca LoRA (within 2-3 hours on a single 40GB A100 GPU). In particular, Stanford Alpaca is a fine-tuned version of Meta's LLaMA (a large language model with tens of billions of parameters), trained on a small instruction set. Alpaca LoRA uses low-rank adaptation (LoRA) to reduce the time and resources required to reproduce Alpaca.
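The key idea behind LoRA can be sketched in a few lines: the pretrained weight matrix is frozen, and only two small low-rank matrices are trained, so the number of trainable parameters drops by orders of magnitude. The sketch below uses NumPy and illustrative names (it is not code from the Alpaca LoRA repository), with LLaMA-scale layer dimensions for concreteness.

```python
# Minimal sketch of low-rank adaptation (LoRA): instead of updating a
# full weight matrix W (d_out x d_in), train two small matrices
# B (d_out x r) and A (r x d_in) with r << min(d_out, d_in), and use
# W + B @ A in the forward pass. Names here are illustrative.
import numpy as np

d_out, d_in, r = 4096, 4096, 8  # LLaMA-scale layer, small LoRA rank

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))  # trainable up-projection; zero init keeps
                          # the adapted model identical to W at start

x = rng.standard_normal(d_in)
y = W @ x + B @ (A @ x)  # adapted forward pass

full_params = d_out * d_in            # what full fine-tuning would train
lora_params = r * (d_out + d_in)      # what LoRA actually trains
print(f"trainable: {lora_params} vs {full_params} "
      f"({100 * lora_params / full_params:.2f}% of full fine-tuning)")
```

With rank 8 on a 4096x4096 layer, LoRA trains well under 1% of the parameters that full fine-tuning would, which is why Alpaca can be reproduced on a single Colab GPU.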

This notebook serves only as an example. See Stanford Alpaca and Alpaca LoRA for full usage instructions.
