The future is personal agents, and Dash is one of them. It gives you an AI assistant that books restaurants, orders food, and manages communications by filtering spam and prioritizing urgent messages.

Dash is an AI-powered platform where each user gets their own personalized agent, designed to blend natural language conversation with dynamic knowledge graph interactions. Each user's personal agent provides a modern, real-time conversational experience while integrating multi-agent reasoning, graph database querying, and automated ingestion of external data sources.
In today's digital world, we're drowning in information across dozens of platforms while juggling countless tasks. What if everyone had their own personal AI agent - not just another chatbot, but a truly personalized assistant that remembers everything about you, learns your preferences, and takes meaningful actions on your behalf? Dash was born from this vision of AI that works for you, understands you, and evolves with you - creating a future where personal agents transform how we interact with technology and manage our digital lives.
Dash gives each user their own personalized AI agent with powerful capabilities:
- **Personalized Memory & Understanding**: Each agent maintains a persistent memory of user preferences and past interactions using a private knowledge graph database.
- **Intelligent Communication Management**: Dash monitors incoming messages across email, WhatsApp, and Slack, automatically categorizing them as spam, important, or urgent and sending appropriate notifications.
- **Action-Taking Capabilities**: Agents can perform tasks through integrated services, such as booking restaurants through Dineout, ordering food from delivery services, and sending emails (with draft confirmation).

Note: All API integrations (WhatsApp, Slack, email, Dineout, food ordering) are currently implemented as mock services for development purposes and are not connected to real external APIs.
- **Personalized AI Agents for Each User**
  - Every user gets their own dedicated AI assistant
  - Personal agents with user-specific memory and preferences
  - Action-taking capabilities through mock service integrations
- **Conversational AI with Memory & Action Capabilities**
  - Multi-turn conversations using interchangeable LLMs
  - Support for both question answering and action-taking
  - Context-aware responses with historical conversation tracking
- **Knowledge Graph Integration**
  - Natural language queries translate to secure AQL queries
  - Per-user private database alongside a shared, read-only common DB
  - Dynamic relationship mapping between entities
- **Real-Time Interaction**
  - WebSocket-based real-time updates and "thinking" indicators
  - Low-latency bidirectional communication
- **Intelligent Message Prioritization**
  - Automatic detection of spam, important, and urgent messages/emails
  - Smart notification system based on message priority
  - User-configurable priority thresholds and notification preferences
- **Consumer Agents for External Data**
  - Monitor external message sources (emails, Slack, WhatsApp, etc.)
  - Automated data ingestion and processing
- **Admin Simulation Interface (Blockly)**
  - Simulate external events using a graphical interface
  - Test system behavior with controlled inputs
The system currently uses mock implementations for the following service integrations:

- **Messaging Services**: WhatsApp, Slack, Email (simulated communication)
- **Food Services**: Restaurant bookings (Dineout), food ordering
- **Productivity**: Calendar, Contacts, Email drafting
- **Notifications**: Push notifications, priority alerts, message categorization
These simulated integrations allow for development and testing of the agent logic without requiring actual connections to external services.
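For a sense of what these stand-ins look like, a mock integration can be as small as a class that returns canned responses. The class name and payload shape below are illustrative assumptions, not Dash's actual code:

```python
class MockDineoutService:
    """Stands in for the real Dineout API during development and testing."""

    def book_table(self, restaurant: str, party_size: int, time: str) -> dict:
        # No network call is made; the agent receives a canned confirmation
        # shaped like a real booking response.
        return {
            "status": "confirmed",
            "restaurant": restaurant,
            "party_size": party_size,
            "time": time,
            "confirmation_id": "MOCK-0001",
        }
```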
- **Frontend**:
  - ReactJS with Material UI (in progress)
  - WebSocket client for real-time updates
  - Responsive design for multiple device types
- **Backend**:
  - Flask Python framework
  - LangChain and LangGraph for AI agent orchestration
  - Flask-SocketIO for WebSocket communication
- **Database**:
  - ArangoDB for graph-based data storage
  - Separate database environments for development, testing, and production
- **Testing**:
  - Comprehensive Python test suite
  - End-to-end testing for critical user flows
- **Agent Framework**: The core intelligence uses LangChain and LangGraph to orchestrate complex reasoning flows, letting the agent decide when to query databases and when to take actions (sketched below).
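A minimal sketch of that routing idea in LangGraph; the node names and the keyword-based router are illustrative stand-ins for Dash's actual LLM-driven prompts:

```python
from typing import TypedDict

from langgraph.graph import END, StateGraph


class AgentState(TypedDict):
    question: str
    route: str
    answer: str


def route_request(state: AgentState) -> AgentState:
    # A real router would ask the LLM; a keyword check keeps the sketch runnable.
    state["route"] = "action" if "book" in state["question"].lower() else "query"
    return state


def query_graph(state: AgentState) -> AgentState:
    state["answer"] = "(result of an AQL query against the user's graph)"
    return state


def take_action(state: AgentState) -> AgentState:
    state["answer"] = "(confirmation returned by a mock service)"
    return state


graph = StateGraph(AgentState)
graph.add_node("router", route_request)
graph.add_node("query", query_graph)
graph.add_node("action", take_action)
graph.set_entry_point("router")
graph.add_conditional_edges("router", lambda s: s["route"], {"query": "query", "action": "action"})
graph.add_edge("query", END)
graph.add_edge("action", END)

agent = graph.compile()
print(agent.invoke({"question": "Book a table for two tonight"})["answer"])
```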
- **Knowledge Storage**: I implemented ArangoDB as a graph database, with each user getting their own private database (`user_{user_id}`) alongside access to a shared, read-only `common_db`. This design supports multiple users securely while ensuring they can't access each other's data.
- **Consumer Agents**: Specialized Celery-based agents monitor external message sources, using LLMs to extract relevant identifiers and update the knowledge graph. These agents also classify messages as spam, important, or urgent, sending appropriate notifications to users (a rough sketch follows).
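As a rough sketch of one consumer agent, assuming Redis as the Celery broker; the broker URL, task name, and helper stubs are placeholders for the real LLM and notification calls:

```python
from celery import Celery

celery_app = Celery("consumer_agents", broker="redis://localhost:6379/0")


def classify_with_llm(body: str) -> str:
    # Placeholder for the LLM classification prompt.
    return "urgent" if "asap" in body.lower() else "important"


def send_notification(user_id: str, label: str, body: str) -> None:
    # Placeholder for the real notification service.
    print(f"notify {user_id}: [{label}] {body[:40]}")


@celery_app.task
def process_incoming_message(user_id: str, source: str, body: str) -> str:
    """Classify one incoming message and notify the user when it matters."""
    label = classify_with_llm(body)  # "spam" | "important" | "urgent"
    if label in ("important", "urgent"):
        send_notification(user_id, label, body)
    return label
```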
Challenges I ran into, and how I solved them:

- **Getting data for the agent**
  - Solution: Obtained sample restaurant data from www.foodspark.io after contacting them.
- **Sharing ArangoDB across all users while maintaining data privacy**
  - Solution: Created a private ArangoDB database and database user for each Dash user. Like password hashes, these DB credentials are stored in the `_system` database, which only the backend can access. A shared `common_db`, readable by everyone, holds restaurant data, dishes, prices, and ratings (see the provisioning sketch below).
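A provisioning sketch with python-arango; the host, root credentials, and helper name are assumptions for illustration:

```python
from arango import ArangoClient

client = ArangoClient(hosts="http://localhost:8529")
sys_db = client.db("_system", username="root", password="root_password")


def provision_user(user_id: str, db_password: str) -> None:
    """Create a private database plus a DB user scoped to it, then grant
    read-only access to the shared common_db."""
    username = f"dash_user_{user_id}"
    sys_db.create_database(
        name=f"user_{user_id}",
        users=[{"username": username, "password": db_password, "active": True}],
    )
    # "ro" is ArangoDB's read-only permission level.
    sys_db.update_permission(username, permission="ro", database="common_db")
```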
- **Limited support for defining custom nodes and edges when importing data**
  - Solution: Wrote a custom importer that parallelizes batched inserts, making imports dramatically faster while preserving proper node types and connections (sketched below).
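The importer's core trick is batching plus parallelism rather than per-document inserts. A simplified version, with connection details and batch sizes as assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

from arango import ArangoClient

db = ArangoClient(hosts="http://localhost:8529").db(
    "common_db", username="root", password="root_password"
)
restaurants = db.collection("restaurants")


def import_batch(docs: list) -> None:
    # import_bulk sends a whole batch in one request, which is far faster
    # than inserting documents one at a time.
    restaurants.import_bulk(docs, on_duplicate="update")


def parallel_import(docs: list, batch_size: int = 1000, workers: int = 8) -> None:
    batches = [docs[i : i + batch_size] for i in range(0, len(docs), batch_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(import_batch, batches))
```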
- **Lack of resources on the low-level design of personal agent systems**
  - Solution: Designed the system from first principles using LangChain, LangGraph, and Celery, working out which prompts are effective and how a personal agent should be structured so it stays flexible for extensions and scalable.
- **UI development challenges as a backend developer**
- **Finding the most cost-effective and scalable agent implementation**
  - Solution: Kept prompts concise and removed unnecessary LLM calls. Modified `ArangoGraphQAChain` to support turning off LLM answer generation when only raw AQL query output is needed (illustrated below).
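The idea behind that modification can be shown without patching the chain itself: one LLM call generates AQL, which is executed directly, with no second LLM call to turn rows into prose. The model choice, prompt, and connection details below are illustrative:

```python
from arango import ArangoClient
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
db = ArangoClient(hosts="http://localhost:8529").db(
    "common_db", username="root", password="root_password"
)


def nl_to_raw_results(question: str) -> list:
    """One LLM call to produce AQL, then direct execution; skipping the
    answer-generation call saves tokens and latency."""
    prompt = (
        "Translate this question into a single read-only AQL query over a "
        f"'restaurants' collection. Return only the query.\nQuestion: {question}"
    )
    # Strip stray backticks in case the model wraps the query in a code fence.
    aql = llm.invoke(prompt).content.strip().strip("`")
    return list(db.aql.execute(aql))
```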
- **Running LLM-generated code securely and efficiently**
  - Solution: Implemented a sandboxed execution environment using Docker with Jupyter notebooks: one cell loads the database into NetworkX, and another runs the model-generated code (a stripped-down sketch follows).
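Dash's actual sandbox uses Jupyter cells with the graph preloaded into NetworkX; this sketch shows only the containment layer, using the Docker SDK with an assumed image and limits:

```python
import docker

client = docker.from_env()


def run_sandboxed(code: str) -> str:
    """Execute model-generated Python in a throwaway container with no
    network access and a memory cap."""
    output = client.containers.run(
        image="python:3.11-slim",
        command=["python", "-c", code],
        network_disabled=True,  # no outbound access from untrusted code
        mem_limit="512m",       # bound resource usage
        remove=True,            # delete the container after it exits
    )
    return output.decode()


print(run_sandboxed("print(2 + 2)"))
```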
- **Limiting events and tools to the user level**
  - Solution: Implemented a factory design pattern in which all tools and agents are generated per user from the user ID, granting access only to that user's private database (see below).
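A sketch of that factory, assuming python-arango for the per-user connection and LangChain's tool decorator; the names and credential handling are illustrative:

```python
from arango import ArangoClient
from langchain_core.tools import tool

client = ArangoClient(hosts="http://localhost:8529")


def make_user_tools(user_id: str, db_password: str) -> list:
    """Build tools closed over one user's private database, so an agent
    constructed from them can never touch another user's data."""
    user_db = client.db(
        f"user_{user_id}", username=f"dash_user_{user_id}", password=db_password
    )

    @tool
    def query_my_graph(aql: str) -> list:
        """Run a read-only AQL query against the current user's private graph."""
        return list(user_db.aql.execute(aql))

    return [query_my_graph]
```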
I'm proud of the creative problem-solving that went into Dash: from database isolation techniques to secure code execution, the architecture cleanly separates user data while maintaining shared knowledge. The LLM-usage optimizations and custom importer reflect a commitment to building not just a functional system, but one that can scale efficiently.
Through developing Dash, I gained deep expertise in agentic frameworks like LangChain and LangGraph, effective prompt engineering techniques, and UI development basics. Most importantly, I discovered the power of ArangoDB as a graph database solution, which has become my favorite database technology for its flexibility and performance in knowledge graph applications.
Prerequisites:

- Python 3.9+
- Node.js 16+
- ArangoDB 3.9+
- Docker and Docker Compose (optional, but recommended)
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd dash
  ```
- Create a `.env` file in the root directory with your API keys:

  ```
  OPENAI_API_KEY=your_openai_api_key
  ARANGO_URL=http://localhost:8529
  ARANGO_DB_NAME=dash
  ARANGO_USERNAME=root
  ARANGO_PASSWORD=your_password
  ```
- Start the application using Docker Compose:

  ```bash
  cd backend
  docker-compose up -d
  ```
- Access the application:
  - Frontend: http://localhost:3000
  - Backend API: http://localhost:5000
To run the backend manually instead of through Docker:

- Navigate to the backend directory:

  ```bash
  cd backend
  ```
- Create and activate a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Set up ArangoDB:
  - Install ArangoDB from the official website
  - Create a database named `dash`
  - Set up a user with appropriate permissions
- Update the `.env` file with your database credentials and API keys:

  ```
  OPENAI_API_KEY=your_openai_api_key
  ARANGO_URL=http://localhost:8529
  ARANGO_DB_NAME=dash
  ARANGO_USERNAME=root
  ARANGO_PASSWORD=your_password
  ```
- Run the application:

  ```bash
  python run.py
  ```
To run the frontend:

- Navigate to the frontend directory:

  ```bash
  cd frontend
  ```
- Install dependencies:

  ```bash
  npm install
  ```
- Start the development server:

  ```bash
  npm start
  ```
- Access the application at http://localhost:3000
Project structure:

```
dash/
├── backend/
│   ├── app/
│   │   ├── agents/            # LangGraph agents
│   │   ├── consumer_agents/   # External data processors
│   │   ├── models.py          # Database models
│   │   ├── routes/            # API endpoints
│   │   └── __init__.py        # App initialization
│   ├── migrations/            # Database migrations
│   ├── tests/                 # Python test suite
│   ├── run.py                 # Entry point
│   └── requirements.txt       # Python dependencies
├── frontend/
│   ├── public/                # Static assets
│   ├── src/
│   │   ├── components/        # React components
│   │   ├── contexts/          # React contexts
│   │   ├── pages/             # Page components
│   │   ├── services/          # API and WebSocket services
│   │   └── App.js             # Main app component
│   └── package.json           # Node.js dependencies
└── README.md                  # This file
```
Roadmap:

- ✅ Basic chat functionality with LangGraph integration
- ✅ Knowledge graph integration with ArangoDB
- ✅ Consumer agents for external data sources
- ⬜ Admin simulation interface with Blockly
- ⬜ Comprehensive test coverage
- ⬜ Production deployment pipeline
- ⬜ Frontend implementation with React and Material UI
I plan to continue development during weekends and free time, focusing on implementing the UI, improving various tools, and exploring different approaches to writing agents. My goal is to steadily enhance Dash's capabilities while maintaining its core vision of providing truly personal AI agents that understand and act according to each user's unique needs and preferences.
Development guidelines:

- Keep code files under 300 lines to maintain readability
- Write tests for all major functionality
- Follow environment-specific configurations for dev, test, and prod
- Avoid data mocking in production code
- Prefer simple solutions and avoid code duplication
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.