PREAA (Providing Reusable Equitable AI Access in Academia) is a comprehensive open-source platform designed to democratize AI access within academic institutions. Our carefully curated stack of open-source technologies has been researched and selected to provide the most effective, scalable, and equitable AI solutions for educational environments.
Academic institutions face significant barriers when implementing AI technologies, from cost constraints to technical complexity. PREAA addresses these challenges by providing a battle-tested, open-source technology stack that enables institutions to deploy enterprise-grade AI capabilities without the typical financial and technical overhead.
PREAA integrates several best-in-class open-source technologies to create a comprehensive AI platform:
- LibreChat: A ChatGPT-like all-in-one interface providing authentication, conversation history, and multi-model support
- LiteLLM: Unified proxy for LLM requests with rate limiting and standardized OpenAI-compatible interface
- LangFlow: Visual LLM workflow builder supporting multiple AI providers, agentic tools, RAG components, and custom integrations
- LangFuse: Comprehensive LLM analytics platform for token usage tracking, completion costs, and performance metrics
- Embedded Chat Widget: Lightweight, embeddable chat interface for seamless integration into existing web applications
- Standalone Chat Client: Full-featured React-based chat application with modern UI/UX
- Admin Dashboard: Next.js-based administrative interface for system management and configuration
- Helper Backend: NestJS-based service providing request proxying capabilities and administrative functions
- Custom Integrations:
- LangFlow custom components for enhanced chat completion interfaces
- LiteLLM custom providers for workflow integration
- PostgreSQL: Primary database for user data, conversation history, and analytics
- MongoDB: Document storage for LibreChat data
- Redis: Caching and session management
- ClickHouse: High-performance analytics database for LangFuse metrics
- MinIO: S3-compatible object storage
- Prometheus & Grafana: Monitoring and visualization stack
- Docker and Docker Compose
- Git
- OpenSSL (for key generation)
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd PREAA
  ```
- Navigate to the deployment directory:

  ```bash
  cd deploy/local
  ```
- Set up environment variables:

  Copy the sample environment files and configure them:

  ```bash
  # Copy all sample environment files
  cp config/.env.clickhouse.sample config/.env.clickhouse
  cp config/.env.minio.sample config/.env.minio
  cp config/.env.redis.sample config/.env.redis
  cp config/.env.psql.sample config/.env.psql
  cp config/.env.langfuse.sample config/.env.langfuse
  cp config/.env.librechat.sample config/.env.librechat
  cp config/.env.litellm.sample config/.env.litellm
  cp config/.env.librechat-metrics.sample config/.env.librechat-metrics
  cp config/.env.langflow.sample config/.env.langflow
  ```
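The copy step above can also be scripted as a loop over the service names (a sketch; the `copy_env_samples` helper name is our own, and the service list is taken directly from the commands above):

```bash
# Copy each sample env file to its active name, skipping any that already exist
copy_env_samples() {
  for svc in clickhouse minio redis psql langfuse librechat litellm librechat-metrics langflow; do
    src="config/.env.${svc}.sample"
    dst="config/.env.${svc}"
    # Only copy when a sample exists and the target has not been created yet
    if [ -f "$src" ] && [ ! -f "$dst" ]; then
      cp "$src" "$dst"
      echo "created $dst"
    fi
  done
}
copy_env_samples
```

The existence check keeps the loop idempotent, so re-running it never overwrites secrets you have already configured.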
- Configure required secrets:

  Database passwords (generate a secure password for each):

  - `config/.env.clickhouse`: Set `CLICKHOUSE_PASSWORD`
  - `config/.env.minio`: Set `MINIO_ROOT_PASSWORD`
  - `config/.env.redis`: Set `REDIS_PASSWORD`
  - `config/.env.psql`: Set `POSTGRES_PASSWORD`
  LangFuse configuration (`config/.env.langfuse`):

  ```bash
  # Generate encryption key
  ENCRYPTION_KEY=$(openssl rand -hex 32)

  # Set the following variables:
  # SALT=<any secure string>
  # CLICKHOUSE_PASSWORD=<matching value from .env.clickhouse>
  # LANGFUSE_S3_EVENT_UPLOAD_SECRET_ACCESS_KEY=<matching MINIO_ROOT_PASSWORD>
  # LANGFUSE_S3_MEDIA_UPLOAD_SECRET_ACCESS_KEY=<matching MINIO_ROOT_PASSWORD>
  # REDIS_AUTH=<matching REDIS_PASSWORD>
  # DATABASE_URL=postgresql://postgres:<POSTGRES_PASSWORD>@postgres:5432/postgres
  ```
  LibreChat configuration (`config/.env.librechat`):

  ```bash
  # Generate LibreChat secrets using their generator:
  # https://www.librechat.ai/toolkit/creds_generator
  # Set: CREDS_KEY, CREDS_IV, JWT_SECRET, JWT_REFRESH_SECRET
  # Set: LITELLM_API_KEY=<matching LITELLM_MASTER_KEY from .env.litellm>
  ```
  LiteLLM configuration (`config/.env.litellm`):

  ```bash
  # Set: LITELLM_MASTER_KEY=<any secure string>
  # Set: DATABASE_URL=postgresql://postgres:<POSTGRES_PASSWORD>@postgres:5432/postgres
  ```
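The secret generation across the files above can be scripted in one pass with `openssl`. This is a sketch: the variable names come from the sample env files, and the `sk-` prefix on the LiteLLM key is an assumption based on LiteLLM's conventional key format.

```bash
# Generate one strong random value per secret referenced above
CLICKHOUSE_PASSWORD=$(openssl rand -hex 16)
MINIO_ROOT_PASSWORD=$(openssl rand -hex 16)
REDIS_PASSWORD=$(openssl rand -hex 16)
POSTGRES_PASSWORD=$(openssl rand -hex 16)
ENCRYPTION_KEY=$(openssl rand -hex 32)   # 64 hex chars, as LangFuse expects
SALT=$(openssl rand -hex 16)
LITELLM_MASTER_KEY="sk-$(openssl rand -hex 16)"

# The shared Postgres connection string used by both LangFuse and LiteLLM
DATABASE_URL="postgresql://postgres:${POSTGRES_PASSWORD}@postgres:5432/postgres"
echo "$DATABASE_URL"
```

Remember that several values must match across files (for example, `REDIS_AUTH` in `.env.langfuse` must equal `REDIS_PASSWORD` in `.env.redis`), so generate each value once and paste it everywhere it is referenced.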
- Start the platform:

  ```bash
  docker-compose up -d
  ```
- Access the services:
- LibreChat (Main Interface): http://localhost:3080
- LangFuse (Analytics): http://localhost:3000
- LiteLLM (Proxy Management): http://localhost:4000
- LangFlow (Workflow Builder): http://localhost:7860
- Grafana (Monitoring): http://localhost:3002
- Admin Dashboard: Configure separately in packages/admin
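Once the containers are up, a small readiness probe can confirm each service is answering before you continue. This is a sketch (the `wait_for` helper is our own; the URLs are the ones listed above, and `curl` is assumed to be installed):

```bash
# Poll a URL until it responds successfully, or give up after N attempts
wait_for() {
  url="$1"
  attempts="${2:-30}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS -o /dev/null "$url"; then
      echo "up: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}

# Example: wait for the main interface and the analytics UI
# wait_for http://localhost:3080   # LibreChat
# wait_for http://localhost:3000   # LangFuse
```

Services such as LangFuse can take a minute on first boot while migrations run, so a polling loop is more reliable than a single request.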
- Create a LangFuse account at http://localhost:3000
- Set up a new organization and project
- Generate API keys (public and private)
- Access LiteLLM at http://localhost:4000/ui
- Log in with username `admin` and your `LITELLM_MASTER_KEY` as the password
- Under "Logging & Alerts", add a LangFuse callback
- Enter your LangFuse API keys and set the host to `http://langfuse-web:3000`
- Test the integration
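One way to test the integration end to end is to send a single completion through the LiteLLM proxy and then check that a trace appears in your LangFuse project. A sketch (`<your-model>` is a placeholder for a model configured in your deployment, and `LITELLM_MASTER_KEY` is the value from `.env.litellm`):

```bash
# One test request through LiteLLM's OpenAI-compatible endpoint
LITELLM_URL="http://localhost:4000/v1/chat/completions"
payload='{"model": "<your-model>", "messages": [{"role": "user", "content": "integration test"}]}'
curl -fsS "$LITELLM_URL" \
  -H "Authorization: Bearer ${LITELLM_MASTER_KEY:-sk-placeholder}" \
  -H "Content-Type: application/json" \
  -d "$payload" || echo "request failed: is the stack running and the model name set?"
```

If the callback is wired up correctly, the request, its token counts, and its cost should show up in the LangFuse UI within a few seconds.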
Each frontend package can be developed independently:

```bash
# Embedded Chat Widget
cd packages/embedded-chat
npm install
npm run dev
```

```bash
# Admin Dashboard
cd packages/admin
npm install
npm run dev:https # Note: uses HTTPS for functionality
```

```bash
# Chat Client
cd packages/chat-client
npm install
npm run dev
```

```bash
# Helper Backend (NestJS)
cd packages/helper-backend
npm install
npm run start:dev
```
LangFlow custom components:

```bash
cd packages/langflow
pip install -r requirements.txt
# Restart the LangFlow container to see changes
```

LiteLLM custom providers:

```bash
cd packages/litellm
pip install -r requirements.txt
# Restart the LiteLLM container to see changes
```
The platform includes comprehensive monitoring capabilities:
- LangFuse: Track LLM usage, costs, and performance metrics
- Prometheus: System metrics collection
- Grafana: Visualization and dashboards
- LibreChat Metrics: Specialized metrics exporter for LibreChat
- All services run in isolated Docker containers
- Environment variables store sensitive configuration
- Database connections use authentication
- API keys provide service-to-service authentication
- HTTPS support available for production deployments
We welcome contributions from the academic community! Please see our contribution guidelines and feel free to:
- Report issues or bugs
- Suggest new features
- Submit pull requests
- Share your deployment experiences
This project is licensed under the terms described in the LICENSE file in the repository.
For support, please:
- Check the individual component documentation in each package
- Review the deployment configuration files
- Open an issue in the repository
- Consult the upstream documentation for each integrated technology
PREAA - Empowering academic institutions with accessible, equitable AI technology.