Finetuning_Llama

Fine-tuning large language models like LLaMA has transformed the way we adapt pre-trained models for specialized tasks. This repository focuses on parameter-efficient fine-tuning techniques such as LoRA and QLoRA to adapt the LLaMA2-7B model to Indian legal text datasets.
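As a concrete starting point, here is a minimal setup sketch assuming the Hugging Face transformers, peft, and bitsandbytes libraries: the base model is loaded in 4-bit NF4 precision (the QLoRA recipe) and LoRA adapters are attached to the attention projections, so only a small fraction of the weights is trained. The rank, alpha, dropout, and target-module choices below are illustrative defaults, not values prescribed by this repository.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# QLoRA-style 4-bit quantization: NF4 weights with double quantization.
# float16 compute suits Colab T4 GPUs; use torch.bfloat16 on Ampere or newer.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

model_id = "meta-llama/Llama-2-7b-hf"  # gated on the Hub; request access first
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA ships without a pad token

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections; r/alpha/dropout are illustrative.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the 7B weights
```

Training only the adapter weights is what keeps the memory footprint small enough for a single Colab-class GPU.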

Problem Statement

You are tasked with fine-tuning the LLaMA2-7B model on a dataset related to Indian laws to make it capable of generating context-aware legal insights. The challenge is to leverage advanced fine-tuning techniques like LoRA/QLoRA to optimize the training process while keeping computational requirements minimal. Demonstrate your skills in model tuning and deployment!
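One possible shape for the training loop is sketched below, continuing from the setup above (it reuses the `model` and `tokenizer` defined there). The file `indian_laws.jsonl` and its `text` field are placeholders for whichever Indian-law corpus you choose, and the hyperparameters are starting points rather than tuned values.

```python
from datasets import load_dataset
from transformers import (
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Placeholder corpus: one JSON object per line with a "text" field.
dataset = load_dataset("json", data_files="indian_laws.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="llama2-7b-indian-law-lora",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,  # effective batch size of 16 on one GPU
    learning_rate=2e-4,
    num_train_epochs=1,
    fp16=True,
    logging_steps=10,
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    # Causal LM objective: labels are a copy of input_ids; the model
    # shifts them internally when computing the loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Saves only the small LoRA adapter weights, not the full 7B base model.
model.save_pretrained("llama2-7b-indian-law-lora")
```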

Instructions

  • Refer to articles, research papers, and official documentation for guidance on techniques and best practices.

  • Do not alter any pre-written code or comments.

  • Write code only in the provided space and document your steps with comments for better understanding.

  • Use Google Colab or similar GPU-enabled environments for training and testing the model.

Help

For any queries or support, feel free to reach out via email at [email protected] or [email protected], or join the discussion on the project’s Discord server.

Contributions

Contributions are welcome! Follow these steps:

  • Fork this repository and clone it to your local device.

  • Work on individual tasks in a separate branch.

  • Push your updates to the forked repo and create a Pull Request (PR).

  • Your PR will be reviewed, and upon approval, merged into the main repository.

Happy Fine-Tuning!
