This project consists of an interactive chatbot (IITM Infobot) and a FastAPI backend. The chatbot helps students with questions about the IITM BS degree program by providing information on courses, the application process, and general advice.
- `frontend/`: Contains the Chainlit-based chatbot frontend.
- `backend/`: Contains the FastAPI backend and related components.
- Clone the Repository:

  ```bash
  git clone https://github.com/Eyuvaraj/LLM-Hackathon-IITM.git
  cd LLM-Hackathon-IITM
  ```
- Environment Variables:

  Copy `.env-sample` to `.env` in the `backend/` directory and add your API keys and settings:

  ```
  HF_TOKEN=your_hf_token_here
  GROQ_API_KEY=your_groq_api_key_here
  NOMIC_API_KEY=your_nomic_api_key_here
  base_url=local_llm_url
  api_key=local_llm_api_key
  dev=True
  ```
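The backend presumably reads these settings from the process environment once the `.env` file is loaded (for example via python-dotenv). A minimal sketch of that lookup, assuming the variable names shown above; note that `dev` arrives as a string and must be normalised to a boolean:

```python
import os

def load_settings():
    """Read the .env-derived settings from the process environment.

    Assumes the variables have already been exported (by python-dotenv
    or the shell) under the names shown in the .env sample above.
    """
    return {
        "hf_token": os.getenv("HF_TOKEN"),
        "groq_api_key": os.getenv("GROQ_API_KEY"),
        "nomic_api_key": os.getenv("NOMIC_API_KEY"),
        "base_url": os.getenv("base_url"),
        "api_key": os.getenv("api_key"),
        # dev is the string "True"/"False" in .env; convert it to a bool
        "dev": os.getenv("dev", "True").strip().lower() == "true",
    }
```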
- Navigate to the Backend Directory:

  ```bash
  cd backend
  ```

- Create and Activate a Virtual Environment (optional but recommended):

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  ```

- Install Backend Dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Start the FastAPI Backend:

  ```bash
  uvicorn api:app --reload --port 5000
  ```
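To confirm the backend came up, you can hit FastAPI's auto-generated Swagger page at `/docs`. A small self-contained check (the port matches the uvicorn command above):

```python
import urllib.request
import urllib.error

def backend_is_up(base_url="http://localhost:5000"):
    """Return True if the FastAPI backend answers on its /docs page."""
    try:
        with urllib.request.urlopen(f"{base_url}/docs", timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```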
- Navigate to the Frontend Directory:

  ```bash
  cd frontend
  ```

- Create and Activate a Virtual Environment (optional but recommended):

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  ```

- Install Frontend Dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Start the Chainlit Chatbot:

  ```bash
  chainlit run app.py -w
  ```
You can also use Docker to run the backend and frontend components.
- Navigate to the Backend Directory:

  ```bash
  cd backend
  ```

- Build the Docker Image (Docker repository names must be lowercase):

  ```bash
  docker build -t iitm-bot-backend .
  ```

- Run the Docker Container:

  ```bash
  docker run -p 5000:5000 iitm-bot-backend
  ```
- Navigate to the Frontend Directory:

  ```bash
  cd frontend
  ```

- Build the Docker Image:

  ```bash
  docker build -t iitm-bot-frontend .
  ```

- Run the Docker Container:

  ```bash
  docker run -p 8000:8000 iitm-bot-frontend
  ```
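The two containers above can also be started together with Docker Compose. A sketch of a `docker-compose.yml`, assuming the repository layout and ports described in this README (the service names are illustrative):

```yaml
services:
  backend:
    build: ./backend
    ports:
      - "5000:5000"
  frontend:
    build: ./frontend
    ports:
      - "8000:8000"
    depends_on:
      - backend
```

With this file in the repository root, `docker compose up --build` builds and runs both services in one step.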
Once the backend and chatbot are running, you can interact with the IITM Infobot by navigating to the Chainlit interface in your web browser at http://localhost:8000/. The bot will assist you with queries related to the IITM BS degree program.
Due to context-length limitations in the provided API, the bot integrates with Groq's llama3-70b model instead. However, meta-llama/Meta-Llama-3-8B-Instruct can be used by setting the `dev` variable in `.env` to `False`.
| Setting          | `dev=True`        | `dev=False`               |
| ---------------- | ----------------- | ------------------------- |
| Model            | llama3-70b-8192   | Meta-Llama-3-8B-Instruct  |
| RAG score filter | 0.7               | 0.5                       |
| top_K            | 10                | 2                         |
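A sketch of how this switch might look in the backend (the function and dictionary keys are illustrative, not the project's actual code; the model names and thresholds come from the settings above):

```python
def rag_config(dev: bool) -> dict:
    """Pick the model and retrieval parameters based on the dev flag."""
    if dev:
        # Groq-hosted model with looser retrieval limits
        return {"model": "llama3-70b-8192", "score_filter": 0.7, "top_k": 10}
    # Provided API with a shorter context window, so retrieve less
    return {"model": "Meta-Llama-3-8B-Instruct", "score_filter": 0.5, "top_k": 2}
```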
The project uses the following embedding files:

- `backend/embeddings.py`: Contains the code for generating embeddings and upserting them into ChromaDB.
- `backend/test_embeddings.py`: Contains the code for testing the embeddings from the terminal.
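To illustrate how the RAG score filter and top_K settings interact during retrieval, here is a minimal, self-contained sketch using plain cosine similarity over toy vectors (ChromaDB handles this internally in the actual project; this is only a conceptual model):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, docs, score_filter=0.7, top_k=10):
    """Rank docs by similarity, drop those below score_filter, keep top_k.

    docs is a list of (text, embedding) pairs.
    """
    scored = [(cosine(query_vec, vec), text) for text, vec in docs]
    scored = [(s, t) for s, t in scored if s >= score_filter]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:top_k]]
```

With the stricter dev settings (filter 0.7, top_K 10) more loosely related passages survive the cut than with the production settings (filter 0.5, top_K 2), which keep only the two closest matches.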