An AI-powered WhatsApp assistant for Bible translators, using FastAPI, Twilio, OpenAI, and ChromaDB. Easily runs both locally and in production with environment-aware configuration.
This app uses a `.env` file for local development and Fly.io secrets in production.
- Create a `.env` file

  Copy the example template and edit your own:

  ```shell
  cp .env.example .env
  ```

- Edit your `.env`

  Set your OpenAI API key and any optional overrides:

  ```shell
  FLY_IO=0
  OPENAI_API_KEY=sk-...
  # Optional: override local data path
  # DATA_DIR=/your/custom/path
  ```

- Install dependencies

  ```shell
  pip install -r requirements.txt
  ```

- Run the app

  From your project directory:

  ```shell
  uvicorn bt_servant:app --reload
  ```
We use a centralized config system (`config.py`) to determine behavior based on the environment:
| Variable | Used For | Description |
|---|---|---|
| `FLY_IO` | Detect Fly.io environment | `1` = running on Fly |
| `DATA_DIR` | Path to ChromaDB + history | Optional override of the default data folder |
| `OPENAI_API_KEY` | Auth for OpenAI | Always required |
If `DATA_DIR` is not provided, it defaults to:

- ✅ `/data` when `FLY_IO=1` (Fly volume mount)
- ✅ `./data` relative to the codebase when `FLY_IO=0` (local)
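The fallback rules above can be sketched in a few lines. This is a hypothetical illustration of the resolution order, not the actual contents of `config.py` (the real function name and structure may differ):

```python
import os
from pathlib import Path


def resolve_data_dir() -> Path:
    """Illustrative sketch: explicit DATA_DIR wins, then the Fly volume,
    then a ./data folder next to the code."""
    override = os.environ.get("DATA_DIR")
    if override:
        return Path(override)
    if os.environ.get("FLY_IO") == "1":
        # Fly.io volume mount
        return Path("/data")
    # Local development: data folder relative to the codebase
    return Path(__file__).resolve().parent / "data"
```

The key design point is that the explicit override is checked first, so the same code path works locally, on Fly, and in any custom deployment.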
This project is Fly.io-ready and uses a Dockerfile for deployment.
In production:
- You do not need a `.env` file
- Instead, run:

  ```shell
  fly secrets set FLY_IO=1
  fly secrets set OPENAI_API_KEY=sk-...
  ```

You can also set `DATA_DIR` if needed, but by default it uses the mounted volume at `/data`.
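Since `OPENAI_API_KEY` is always required regardless of environment, it can be worth failing fast at startup when it is missing. A minimal sketch, assuming a hypothetical `require_openai_key` helper (not part of the actual codebase):

```python
import os


def require_openai_key() -> str:
    """Fail fast if OPENAI_API_KEY is missing, with a hint for each environment."""
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set. Locally: add it to .env. "
            "In production: fly secrets set OPENAI_API_KEY=..."
        )
    return key
```

A clear error at boot is easier to diagnose than an authentication failure deep inside an OpenAI call.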
Add these lines to any Python file to see what config is being used:

```python
from config import Config
print("Using data directory:", Config.DATA_DIR)
```
- If your app is writing to the wrong folder (`/app/data` instead of `/data`), double-check your `DATA_DIR` resolution.
- Use `fly ssh console` to inspect files on your production volume.
- To test changes, run `fly deploy --no-cache` for a clean rebuild.
A `.env.example` file like this should be included in your repo:

```shell
# Copy this to .env and customize
FLY_IO=0
OPENAI_API_KEY=your-key-here

# Optional override
# DATA_DIR=./my-data
```
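For clarity, here is what loading such a file amounts to. Real projects typically use a library such as python-dotenv for this; the stdlib-only `load_env_file` below is a hypothetical illustration, not the app's actual loading mechanism:

```python
import os


def load_env_file(path: str = ".env") -> None:
    """Illustrative .env loader: parse KEY=VALUE lines, skip comments and
    blanks, and never overwrite variables already set in the environment."""
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                # setdefault: real environment variables take precedence
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # no .env is fine; fall back to the real environment
```

Note that a missing file is silently ignored, which matches the production setup where no `.env` exists and everything comes from Fly secrets.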
This project exposes a GraphQL API at `/graphql` that allows you to test your assistant without needing to use WhatsApp.
You can send test messages directly to the assistant using this query:
```graphql
query {
  queryBtServant(query: "What is the Nicene Creed?")
}
```
The assistant will respond as if the question had come from a real user, using the default test `user_id` configured in the backend.
This is a great way to:
- Test response formatting
- Debug LLM behavior
- Evaluate updates to your assistant without going through Twilio
Just start the server locally (`uvicorn bt_servant:app --reload`) and navigate to http://localhost:8000/graphql in your browser.
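You can also hit the endpoint from a script instead of the browser. A minimal sketch using only the standard library, assuming the server is running on `localhost:8000` (the naive string interpolation here does no escaping, so it's only suitable for simple test questions):

```python
import json
import urllib.request


def build_graphql_payload(question: str) -> bytes:
    """Wrap a question in the queryBtServant GraphQL query as a JSON body."""
    query = f'query {{ queryBtServant(query: "{question}") }}'
    return json.dumps({"query": query}).encode("utf-8")


def ask_bt_servant(question: str, url: str = "http://localhost:8000/graphql") -> dict:
    """POST the query to the running server and return the decoded response."""
    req = urllib.request.Request(
        url,
        data=build_graphql_payload(question),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

This makes it easy to loop over a list of test questions when evaluating changes to the assistant.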
Happy translating 🚀📖