LLM-Finetuning

PEFT Fine-Tuning Project 🚀

Welcome to the PEFT (Parameter-Efficient Fine-Tuning) project repository! This project focuses on efficiently fine-tuning large language models using LoRA and Hugging Face's transformers library.
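The core idea behind LoRA is that instead of updating a full weight matrix `W`, training only learns a low-rank update `(alpha / r) * B @ A`. Below is a minimal, dependency-free sketch of that idea for a single linear layer; all names here are illustrative, and real training should use the `peft` library as shown in the notebooks.

```python
# Minimal sketch of the LoRA idea for one linear layer (illustrative only).
# W is the frozen (d_out x d_in) weight; A is (r x d_in) and B is (d_out x r)
# with r << min(d_out, d_in), so far fewer parameters are trained.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_forward(x, W, A, B, alpha, r):
    """Compute y = (W + (alpha / r) * B @ A) @ x for one input vector x."""
    scale = alpha / r
    BA = matmul(B, A)  # (d_out x d_in) low-rank update
    W_eff = [[w + scale * d for w, d in zip(w_row, d_row)]
             for w_row, d_row in zip(W, BA)]
    return [sum(w * xi for w, xi in zip(row, x)) for row in W_eff]
```

Note that `B` is initialized to zeros in LoRA, so at the start of training the adapted layer computes exactly the same output as the frozen base layer.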

Fine-Tuning Notebook Table 📑

| # | Notebook Title | Description | Colab |
| --- | --- | --- | --- |
| 1 | Efficiently Train Large Language Models with LoRA and Hugging Face | Details and code for efficiently training large language models using LoRA and Hugging Face. | Open in Colab |
| 2 | Fine-Tune Your Own Llama 2 Model in a Colab Notebook | Guide to fine-tuning your own Llama 2 model in Colab. | Open in Colab |
| 3 | Guanaco Chatbot Demo with LLaMA-7B Model | Showcase of a chatbot demo powered by the LLaMA-7B model. | Open in Colab |
| 4 | PEFT Finetune-Bloom-560m-tagger | Project details for PEFT fine-tuning of the Bloom-560m tagger. | Open in Colab |
| 5 | Finetune_Meta_OPT-6-1b_Model_bnb_peft | Guide to fine-tuning the Meta OPT-6-1b model using PEFT and bitsandbytes. | Open in Colab |
| 6 | Finetune Falcon-7b with BNB Self Supervised Training | Guide to fine-tuning Falcon-7b using BNB self-supervised training. | Open in Colab |
| 7 | FineTune LLaMa2 with QLoRa | Guide to fine-tuning the Llama 2 7B pre-trained model using the PEFT library and the QLoRA method. | Open in Colab |
| 8 | Stable_Vicuna13B_8bit_in_Colab | Guide to fine-tuning Vicuna-13B in 8-bit precision. | Open in Colab |
| 9 | GPT-Neo-X-20B-bnb2bit_training | Guide to training the GPT-NeoX-20B model using bfloat16 precision. | Open in Colab |
| 10 | MPT-Instruct-30B Model Training | MPT-Instruct-30B is a large language model from MosaicML trained on a dataset of short-form instructions. It can be used to follow instructions, answer questions, and generate text. | Open in Colab |
| 11 | RLHF_Training_for_CustomDataset_for_AnyModel | How to train any LLM with RLHF on a custom dataset. | Open in Colab |
| 12 | Fine_tuning_Microsoft_Phi_1_5b_on_custom_dataset(dialogstudio) | How to train Microsoft Phi-1.5 with TRL SFT training on a custom dataset. | Open in Colab |
| 13 | Finetuning OpenAI GPT3.5 Turbo | How to fine-tune GPT-3.5 on your own data. | Open in Colab |
| 14 | Finetuning Mistral-7b Model using Autotrain-advanced | How to fine-tune Mistral-7b using autotrain-advanced. | Open in Colab |
| 15 | RAG LangChain Tutorial | How to use RAG with LangChain. | Open in Colab |
| 16 | Knowledge Graph LLM with LangChain PDF Question Answering | How to build a knowledge graph with PDF question answering. | Open in Colab |
| 17 | Text to Knowledge Graph with OpenAI Function with Neo4j and Langchain Agent Question Answering | How to build a knowledge graph from text or PDF documents with question answering. | Open in Colab |
| 18 | Convert the Document to Knowledgegraph using Langchain and Openai | How to convert any document into a knowledge graph for your next RAG-based application, the easiest way. | Open in Colab |
| 19 | How to train a 1-bit Model with LLMs? | How to train a model with 1-bit and 2-bit quantization using the HQQ framework. | Open in Colab |
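Several of the notebooks above (BNB 8-bit loading, QLoRA, and the 1-bit/2-bit HQQ notebook) rely on weight quantization. As a rough intuition for what those libraries do, here is a dependency-free sketch of absmax int8 quantization; the function names are illustrative and are not the bitsandbytes or HQQ APIs.

```python
# Illustrative sketch of absmax quantization: scale floats so the largest
# magnitude maps to the top of the signed integer range, then round.

def quantize_absmax(values, bits=8):
    """Map floats to signed integers in [-(2**(bits-1) - 1), 2**(bits-1) - 1]."""
    qmax = 2 ** (bits - 1) - 1  # 127 for int8
    scale = max(abs(v) for v in values) / qmax
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    """Recover approximate floats from integers and the stored scale."""
    return [q * scale for q in quantized]
```

Dequantizing the integers recovers the original values up to a small rounding error, which is why quantized base weights can still be combined with full-precision LoRA adapters during fine-tuning.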

Contributing 🤝

Contributions are welcome! If you'd like to contribute to this project, feel free to open an issue or submit a pull request.

License 📝

This project is licensed under the MIT License.


Created with ❤️ by Ashish
