Project Atma is a cognitive AI agent designed to overcome the key weaknesses of large language models (LLMs) and provide a robust, transparent, and adaptable solution for automating complex tasks with Python.
While LLMs are powerful tools, they have several limitations that prevent them from being truly autonomous agents:
- Lack of True Reasoning and Planning: LLMs struggle with multi-step problems. They can't take a high-level goal and break it down into a sequence of concrete actions.
- Static Knowledge & Lack of Long-Term Memory: Models can't learn in real-time or remember information across different sessions, making them unreliable for dynamic business environments.
- Lack of Transparency (The "Black Box" Problem): It's hard to trust an AI when you can't see how it reached a conclusion.
Project Atma addresses these weaknesses head-on:
- Reasoning and Planning: Atma uses a state machine defined in Atma_Protocol.md and a cognitive cycle in Atma_Agent.md. This allows the agent to break down high-level goals into a series of discrete, manageable states, from understanding the user's intent to inventing and validating its own tools (see the sketch after this list).
- Dynamic Knowledge & Long-Term Memory: Atma has a persistent memory in the form of atma_tool_cache.json. When the agent invents a new script to solve a problem, it caches the script's metadata. This allows the agent to reuse previously created tools, effectively learning and expanding its capabilities over time.
- Transparency: The entire process is transparent. The agent's current state and actions are explicit. By observing the agent's transitions through the states defined in the protocol, a user can understand exactly how the agent is approaching a problem.
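To make the state-machine idea concrete, here is a minimal Python sketch of how such a cognitive cycle could be expressed. The state names and transitions are illustrative assumptions, not the actual definitions in Atma_Protocol.md.

```python
from enum import Enum, auto

class AtmaState(Enum):
    # Illustrative states only; the authoritative list lives in Atma_Protocol.md.
    UNDERSTAND_INTENT = auto()
    PLAN = auto()
    CHECK_TOOL_CACHE = auto()
    INVENT_TOOL = auto()
    VALIDATE_TOOL = auto()
    EXECUTE = auto()
    REPORT = auto()

# Allowed transitions form the "rulebook": the agent may only move along these edges.
TRANSITIONS = {
    AtmaState.UNDERSTAND_INTENT: {AtmaState.PLAN},
    AtmaState.PLAN: {AtmaState.CHECK_TOOL_CACHE},
    AtmaState.CHECK_TOOL_CACHE: {AtmaState.INVENT_TOOL, AtmaState.EXECUTE},
    AtmaState.INVENT_TOOL: {AtmaState.VALIDATE_TOOL},
    AtmaState.VALIDATE_TOOL: {AtmaState.EXECUTE, AtmaState.INVENT_TOOL},
    AtmaState.EXECUTE: {AtmaState.REPORT},
    AtmaState.REPORT: set(),
}

def transition(current: AtmaState, proposed: AtmaState) -> AtmaState:
    """Reject any state change the protocol would not allow."""
    if proposed not in TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current.name} -> {proposed.name}")
    return proposed
```

Because every move the agent makes must pass through an explicit transition like this, its reasoning trace is visible step by step rather than hidden inside a single model response.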
The agent's architecture is composed of three key components:
- Atma_Agent.md: This is the "mind" of the agent. It's a symbolic interpreter that executes the logic for each state in the cognitive cycle.
- Atma_Protocol.md: This is the "rulebook" or the state machine that the agent must follow. It defines the valid states and the transitions between them.
- atma_tool_cache.json: This file acts as the agent's long-term memory, storing information about the tools it has created.
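The cache schema is defined by the agent itself; as a rough illustration, assuming a hypothetical layout in which each entry records a task description and the path of the script that solved it, loading and reusing cached tools could look like this:

```python
import json
from pathlib import Path

CACHE_PATH = Path("atma_tool_cache.json")

def load_cache() -> dict:
    """Return the tool cache, or an empty one if the agent has not created tools yet."""
    if CACHE_PATH.exists():
        return json.loads(CACHE_PATH.read_text())
    return {"tools": []}

def remember_tool(cache: dict, description: str, script_path: str) -> None:
    """Append metadata for a newly invented script (hypothetical fields)."""
    cache["tools"].append({"description": description, "script": script_path})
    CACHE_PATH.write_text(json.dumps(cache, indent=2))

def find_tool(cache: dict, query: str) -> str | None:
    """Naive lookup: return the first cached script whose description mentions the query."""
    for tool in cache["tools"]:
        if query.lower() in tool["description"].lower():
            return tool["script"]
    return None
```

The actual field names and matching logic are whatever the agent writes into the file; the point is that tools invented in one session remain available in the next.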
Important: This agent is designed for a Linux-based environment. It relies on shell commands such as mkdir -p, head, and rm -rf that are not available on standard Windows systems. For Windows users, it is recommended to run the agent within the Windows Subsystem for Linux (WSL).
The agent has been tested with the Gemini CLI, using Gemini 2.5 Pro as the underlying LLM.
To launch the agent, load the Atma_Agent.md file. Upon initialization, the agent will automatically load its required protocol (Atma_Protocol.md) and memory (atma_tool_cache.json).
The agent has been tested on the sample databases provided in this project. To try it yourself, ask it to update the main database with the new leads:
"Update the database.csv with the new leads from new_leads.csv."
This project is released under a custom commercial license. See the LICENSE file for details.