This project demonstrates building a simple yet scalable LLM chat application using a microservices architecture, with Docker for containerization. It provides a user-friendly web chat interface for interacting with OpenAI's GPT models.
## Architecture

### Frontend Service
- Built with Gradio for an interactive chat interface
- Runs on port 7860
- Communicates with the backend service
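For orientation, here is a minimal sketch of how such a Gradio frontend is typically wired up. It is not the repository's actual `frontend/app.py`; the backend hostname and the `/chat` endpoint are assumptions.

```python
# frontend/app.py -- minimal sketch, not the repository's actual code.
# The backend hostname and the /chat endpoint are assumptions.
import gradio as gr
import requests

# Inside the Compose network the backend is reachable by its service name.
BACKEND_URL = "http://backend:8000"

def chat(message, history):
    # Forward the user's message to the backend and return the reply text.
    response = requests.post(f"{BACKEND_URL}/chat", json={"message": message})
    response.raise_for_status()
    return response.json()["reply"]

demo = gr.ChatInterface(chat)
demo.launch(server_name="0.0.0.0", server_port=7860)  # port 7860, as above
```

Note that the frontend talks to the backend by its Compose service name rather than `localhost`, since each container has its own network namespace.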
### Backend Service
- Built with FastAPI
- Handles API requests to OpenAI
- Runs on port 8000
- Requires an OpenAI API key
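Likewise, a minimal sketch of what the FastAPI backend might look like; the endpoint path, request schema, and model name are assumptions rather than the project's actual implementation.

```python
# backend/app.py -- minimal sketch, not the repository's actual code.
# Endpoint path, request schema, and model name are assumptions.
# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
import os

from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # supplied via .env

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
def chat(request: ChatRequest):
    # Relay the message to OpenAI and return the model's reply.
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed; swap in whichever model the project uses
        messages=[{"role": "user", "content": request.message}],
    )
    return {"reply": completion.choices[0].message.content}
```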
## Prerequisites

- Docker and Docker Compose installed on your system
- An OpenAI API key

## Environment Setup

Create a `.env` file in the root directory with:

```
OPENAI_API_KEY=your_api_key_here
```
## Running the Application

1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd <repository-directory>
   ```
2. Create the `.env` file with your OpenAI API key as shown above.
3. Build and start the containers (an illustrative Compose file follows these steps):

   ```bash
   docker-compose up --build
   ```
4. Access the application:
   - Open your browser and go to http://localhost:7860
   - The chat interface will be ready to use
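The repository's actual `docker-compose.yml` isn't reproduced here, but a file along these lines would match the ports and services described above; service names, build contexts, and the `env_file` wiring are assumptions.

```yaml
# Illustrative docker-compose.yml -- service names, build contexts,
# and env_file wiring are assumptions about this repository.
services:
  backend:
    build: ./backend
    env_file: .env        # supplies OPENAI_API_KEY to the backend
    ports:
      - "8000:8000"
  frontend:
    build: ./frontend
    ports:
      - "7860:7860"
    depends_on:
      - backend           # start the backend before the frontend
```

Because both services share the Compose network, the frontend can reach the backend at `http://backend:8000` instead of `localhost`.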
## Usage

- The frontend will be available at http://localhost:7860
- The backend API will be available at http://localhost:8000
- Try the example prompts provided in the interface:
- "Tell me a joke"
- "What is the meaning of life?"
- "Write a short poem"
## Development

- Frontend modifications can be made in `frontend/app.py`
- Backend modifications can be made in `backend/app.py`
- After making changes, rebuild the containers:

  ```bash
  docker-compose down
  docker-compose up --build
  ```
## Troubleshooting

- If you see API key errors, ensure your `.env` file is properly configured
- If containers fail to start, check whether ports 7860 and 8000 are available
- For connection issues, ensure both services are running:

  ```bash
  docker-compose ps
  ```
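If `docker-compose ps` shows a service up but the app still misbehaves, the container logs are usually the quickest way to find the cause (the service names here assume the Compose sketch above):

```bash
docker-compose logs backend
docker-compose logs frontend
```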