AI-assisted educational platform for teachers and students.
Phase 1 Complete: Data Models & Testing Infrastructure ✅
- ✅ Models: All Django models implemented according to design specifications
- ✅ Testing: 149 comprehensive test cases, with a 350x test-runtime speedup
- ✅ Architecture: Clean, modular structure with proper separation of concerns
- ✅ Documentation: Comprehensive testing guide and setup instructions
Next Phase: Service Layer & API Development 🔄
This project uses uv workspaces for dependency management.
- `apps/accounts` - User management and authentication
- `apps/conversations` - AI conversation handling and submissions
- `apps/homeworks` - Homework and section management
- `apps/llm` - LLM configuration and services
- `core` - Shared utilities and base classes
- `permissions` - Permission decorators and utilities
- `services` - Business logic service layers
- `src/llteacher` - Main Django project
- Install uv: `pip install uv`
- Install dependencies: `uv sync`
- Run migrations: `python manage.py migrate`
- Create superuser: `python manage.py createsuperuser`
- Configure API key (see Configuration section below)
- Populate test data: `python manage.py populate_test_database`
- Run development server: `python manage.py runserver`
The AI tutoring functionality requires an OpenAI API key. You have two options:
- Start the development server: `python manage.py runserver`
- Go to the admin interface: http://localhost:8000/admin/
- Navigate to LLM > LLM Configs
- Edit the "Test GPT-4 Config" entry
- Replace `test-api-key-placeholder` with your actual OpenAI API key
- Save the configuration
If you want to set the API key during initial setup:
- Edit `src/llteacher/management/commands/populate_test_database.py`
- Find the line with `api_key='test-api-key-placeholder'`
- Replace the placeholder with your actual OpenAI API key
- Run: `python manage.py populate_test_database --reset`
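Rather than hard-coding the key in the command file, a safer variation is to read it from the environment so it never lands in version control. This is a hypothetical sketch, not part of the project: the `OPENAI_API_KEY` variable name and the placeholder fallback are assumptions.

```python
import os

# Hypothetical sketch: pull the key from the environment; fall back to the
# placeholder that the test database seeds when no real key is configured.
api_key = os.environ.get("OPENAI_API_KEY", "test-api-key-placeholder")
```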
- Go to OpenAI's website
- Sign up or log in to your account
- Navigate to the API section
- Generate a new API key
- Copy the key (it starts with `sk-`)
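A quick format check can catch a truncated or placeholder key before it is saved. The helper below is a hypothetical sketch, not part of the project; it only tests the `sk-` prefix mentioned above and cannot confirm the key is actually valid with OpenAI.

```python
def looks_like_openai_key(key: str) -> bool:
    """Heuristic format check only: real keys start with 'sk-'.

    This cannot verify the key against the OpenAI API.
    """
    return key.startswith("sk-") and len(key) > len("sk-")
```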
- Go to the admin interface: http://localhost:8000/admin/
- Navigate to LLM > LLM Configs
- Click on your configuration
- Use the "Test Configuration" feature (if available)
- Or create a conversation as a student to test the AI responses
- Never commit real API keys to version control
- The test database includes a placeholder API key that won't work
- You must replace it with a real key for AI functionality to work
- API keys should be kept secure and not shared
"No valid LLM configuration available"
- Check that you have a default LLM config marked as active
- Verify your API key is correctly set (not the placeholder)
"Technical issue" errors in conversations
- Check the Django logs for specific API errors
- Verify your OpenAI API key has sufficient credits
- Ensure the API key has the correct permissions
AI responses not generating
- Confirm the LLM config is set as default (`is_default=True`)
- Check that the configuration is active (`is_active=True`)
- Verify the model name (e.g., 'gpt-4', 'gpt-3.5-turbo') is correct
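The checks above can be folded into a single diagnostic. The sketch below is hypothetical and standalone: `LLMConfig` here is a plain dataclass standing in for the real model in `apps/llm` (whose fields may differ), and `KNOWN_MODELS` is an illustrative list, not the project's actual allow-list.

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class LLMConfig:
    # Stand-in for the real model in apps/llm; field names follow the
    # troubleshooting checklist above.
    model_name: str
    api_key: str
    is_default: bool = False
    is_active: bool = True

# Assumption: illustrative model names only.
KNOWN_MODELS = {"gpt-4", "gpt-3.5-turbo"}

def first_valid_config(configs: Iterable[LLMConfig]) -> Optional[LLMConfig]:
    """Return the first config that passes all three checks, else None."""
    for cfg in configs:
        if not (cfg.is_default and cfg.is_active):
            continue  # must be the default AND active
        if cfg.api_key == "test-api-key-placeholder":
            continue  # the seeded placeholder key will never work
        if cfg.model_name not in KNOWN_MODELS:
            continue  # likely a typo in the model name
        return cfg
    return None
```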
- Each app is a separate workspace member with its own `pyproject.toml`
- Use `uv add <package>` to add dependencies to specific workspaces
- Use `uv sync` to install all workspace dependencies
The project includes comprehensive testing with 149 test cases covering all models and their functionality.
```bash
# Run all tests (fastest - uses in-memory database)
uv run python run_tests.py

# Run with verbose output
uv run python run_tests.py --verbosity=2

# Run specific app tests
uv run python run_tests.py apps.accounts.tests
```

- Standard Django tests: ~21.348 seconds
- Optimized tests: ~0.061 seconds
- Speed improvement: 350x faster! 🚀
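An in-memory SQLite database is the usual main ingredient in speedups of this size. The fragment below is a hedged sketch of a Django test-settings override; whether `run_tests.py` does exactly this is an assumption.

```python
# Sketch of test-only Django settings: an in-memory SQLite database removes
# all disk I/O from test runs.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": ":memory:",
    }
}

# Fast, insecure password hashing is another common test-only tweak.
PASSWORD_HASHERS = ["django.contrib.auth.hashers.MD5PasswordHasher"]
```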
- ✅ Models: Complete coverage of all Django models
- ✅ Relationships: Foreign keys, one-to-one, cascade deletes
- ✅ Validation: Custom validation methods and business logic
- ✅ Edge Cases: Special characters, long content, boundaries
- ✅ Properties: Custom properties and computed fields
For detailed testing information, see TESTING.md.