Lumon AI is a lightweight personal AI assistant and automation runtime inspired by OpenClaw.
It is built on top of nanobot, but this fork takes a different direction: faster implementation, less workflow churn, and a contributor experience that does not revolve around upstream maintenance habits.
I forked it because I personally disagree with the HKUDS/nanobot workflow: repeated cherry-pick-first development, branch resets that can drop features, and a maintainer-biased OSS process that makes outside contribution more painful than it should be.
For compatibility, the operational names below still use the existing nanobot CLI, package name, Python module names, and ~/.nanobot paths.
Lumon AI is for educational, research, and technical exchange purposes only.
🪶 Ultra-Lightweight: A lightweight implementation of OpenClaw with a small, readable core.
🔬 Research-Ready: Clean code that is easy to understand, modify, and extend.
⚡️ Faster Iteration: This fork prioritizes direct implementation over workflow churn.
💎 Practical Compatibility: The runtime stays compatible with the current nanobot commands and config layout while the product identity moves to Lumon AI.
- Why Lumon AI
- ✨ Features
- 📦 Install
- 🚀 Quick Start
- 💬 Chat Apps
- 🌐 Agent Social Network
- ⚙️ Configuration
- 🧩 Multiple Instances
- 💻 CLI Reference
- 🐳 Docker
- 🐧 Linux Service
- 📁 Project Structure
- 🤝 Contribute & Roadmap
| 📈 24/7 Real-Time Market Analysis | 🚀 Full-Stack Software Engineer | 📅 Smart Daily Routine Manager | 📚 Personal Knowledge Assistant |
|---|---|---|---|
| Discovery • Insights • Trends | Develop • Deploy • Scale | Schedule • Automate • Organize | Learn • Memory • Reasoning |
Install from source (latest features, recommended for development):

```bash
git clone https://github.com/EvanNotFound/lumon.git
cd lumon
uv tool install --force --editable .
```

Install with uv (stable, fast):

```bash
uv tool install nanobot-ai
```

Install from PyPI (stable):

```bash
pip install nanobot-ai
```

Upgrade (PyPI / pip):

```bash
pip install -U nanobot-ai
nanobot --version
```

Upgrade (uv):

```bash
uv tool upgrade nanobot-ai
nanobot --version
```

Using WhatsApp? Rebuild the local bridge after upgrading:

```bash
rm -rf ~/.nanobot/bridge
nanobot channels login whatsapp
```

> [!TIP]
> Set your API key in `~/.nanobot/config.json`.
Get API keys: OpenRouter (Global)
For other LLM providers, please see the Providers section.
For web search capability setup, please see Web Search.
1. Initialize
```bash
nanobot onboard
```

Use `nanobot onboard --wizard` if you want the interactive setup wizard.
2. Configure (~/.nanobot/config.json)
Configure these two parts in your config (other options have defaults).
Set your API key (e.g. OpenRouter, recommended for global users):
```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-xxx"
    }
  }
}
```

Set your model (optionally pin a provider — defaults to auto-detection):
```json
{
  "agents": {
    "defaults": {
      "model": "anthropic/claude-opus-4-5",
      "provider": "openrouter"
    }
  }
}
```

3. Chat

```bash
nanobot agent
```

That's it! You have a working AI assistant in 2 minutes.
Connect Lumon AI to your favorite chat platform. Want to build your own? See the Channel Plugin Guide.
| Channel | What you need |
|---|---|
| Telegram | Bot token from @BotFather |
| Discord | Bot token + Message Content intent |
| WhatsApp | QR code scan (`nanobot channels login whatsapp`) |
| WeChat (Weixin) | QR code scan (`nanobot channels login weixin`) |
| Feishu | App ID + App Secret |
| DingTalk | App Key + App Secret |
| Slack | Bot token + App-Level token |
| Matrix | Homeserver URL + Access token |
| Email | IMAP/SMTP credentials |
| QQ | App ID + App Secret |
| Wecom | Bot ID + Bot Secret |
| Wecom App | Corp ID + Agent ID + Secret + Token + AES Key |
| Mochat | Claw token (auto-setup available) |
Telegram (Recommended)
1. Create a bot

- Open Telegram, search `@BotFather`
- Send `/newbot`, follow prompts
- Copy the token
2. Configure
```json
{
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"],
      "silentToolHints": false
    }
  }
}
```

You can find your User ID in Telegram settings. It is shown as `@yourUserId`. Copy this value without the `@` symbol and paste it into the config file.
3. Run

```bash
nanobot gateway
```

Mochat (Claw IM)
Uses Socket.IO WebSocket by default, with HTTP polling fallback.
1. Ask Lumon AI to set up Mochat for you
Simply send this message to Lumon AI (replace xxx@xxx with your real email):

```
Read https://raw.githubusercontent.com/HKUDS/MoChat/refs/heads/main/skills/nanobot/skill.md and register on MoChat. My Email account is xxx@xxx Bind me as your owner and DM me on MoChat.
```
Lumon AI will automatically register, configure ~/.nanobot/config.json, and connect to Mochat.
2. Restart gateway
```bash
nanobot gateway
```

That's it - Lumon AI handles the rest.
Manual configuration (advanced)
If you prefer to configure manually, add the following to ~/.nanobot/config.json:
Keep `claw_token` private. It should only be sent in the `X-Claw-Token` header to your Mochat API endpoint.
```json
{
  "channels": {
    "mochat": {
      "enabled": true,
      "base_url": "https://mochat.io",
      "socket_url": "https://mochat.io",
      "socket_path": "/socket.io",
      "claw_token": "claw_xxx",
      "agent_user_id": "6982abcdef",
      "sessions": ["*"],
      "panels": ["*"],
      "reply_delay_mode": "non-mention",
      "reply_delay_ms": 120000
    }
  }
}
```

Discord
1. Create a bot
- Go to https://discord.com/developers/applications
- Create an application → Bot → Add Bot
- Copy the bot token
2. Enable intents
- In the Bot settings, enable MESSAGE CONTENT INTENT
- (Optional) Enable SERVER MEMBERS INTENT if you plan to use allow lists based on member data
3. Get your User ID
- Discord Settings → Advanced → enable Developer Mode
- Right-click your avatar → Copy User ID
4. Configure
```json
{
  "channels": {
    "discord": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_USER_ID"],
      "groupPolicy": "mention"
    }
  }
}
```

`groupPolicy` controls how the bot responds in group channels:

- `"mention"` (default) — Only respond when @mentioned
- `"open"` — Respond to all messages
- DMs always respond when the sender is in `allowFrom`.
- If you set the group policy to `"open"`, create new threads as private threads and then @ the bot into them. Otherwise both the thread itself and the channel in which you spawned it will each spawn a bot session.
5. Invite the bot
- OAuth2 → URL Generator
- Scopes: `bot`
- Bot Permissions: `Send Messages`, `Read Message History`
- Open the generated invite URL and add the bot to your server

6. Run

```bash
nanobot gateway
```

Matrix (Element)
Install Matrix dependencies first:

```bash
pip install nanobot-ai[matrix]
```

1. Create/choose a Matrix account

- Create or reuse a Matrix account on your homeserver (for example `matrix.org`).
- Confirm you can log in with Element.

2. Get credentials

- You need:
  - `userId` (example: `@nanobot:matrix.org`)
  - `accessToken`
  - `deviceId` (recommended so sync tokens can be restored across restarts)
- You can obtain these from your homeserver login API (`/_matrix/client/v3/login`) or from your client's advanced session settings.
3. Configure
```json
{
  "channels": {
    "matrix": {
      "enabled": true,
      "homeserver": "https://matrix.org",
      "userId": "@nanobot:matrix.org",
      "accessToken": "syt_xxx",
      "deviceId": "NANOBOT01",
      "e2eeEnabled": true,
      "allowFrom": ["@your_user:matrix.org"],
      "groupPolicy": "open",
      "groupAllowFrom": [],
      "allowRoomMentions": false,
      "maxMediaBytes": 20971520
    }
  }
}
```

Keep a persistent `matrix-store` and stable `deviceId` — encrypted session state is lost if these change across restarts.
| Option | Description |
|---|---|
| `allowFrom` | User IDs allowed to interact. Empty denies all; use `["*"]` to allow everyone. |
| `groupPolicy` | `open` (default), `mention`, or `allowlist`. |
| `groupAllowFrom` | Room allowlist (used when policy is `allowlist`). |
| `allowRoomMentions` | Accept @room mentions in mention mode. |
| `e2eeEnabled` | E2EE support (default `true`). Set `false` for plaintext-only. |
| `maxMediaBytes` | Max attachment size (default 20MB). Set `0` to block all media. |
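The `maxMediaBytes` rule can be sketched in a few lines; note that the default `20971520` in the config above is exactly `20 * 1024 * 1024` (20 MB). This is an illustrative sketch, not the actual channel code, and the helper name is hypothetical.

```python
def media_allowed(size_bytes: int, max_media_bytes: int = 20 * 1024 * 1024) -> bool:
    """Hypothetical sketch of the maxMediaBytes rule: 0 blocks all media,
    otherwise attachments over the limit are rejected."""
    if max_media_bytes == 0:
        # 0 is a hard switch: block every attachment
        return False
    return size_bytes <= max_media_bytes
```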
4. Run

```bash
nanobot gateway
```

WhatsApp

Requires Node.js ≥18.

1. Link device

```bash
nanobot channels login whatsapp
# Scan QR with WhatsApp → Settings → Linked Devices
```

2. Configure
```json
{
  "channels": {
    "whatsapp": {
      "enabled": true,
      "allowFrom": ["+1234567890"]
    }
  }
}
```

3. Run (two terminals)

```bash
# Terminal 1
nanobot channels login whatsapp
# Terminal 2
nanobot gateway
```

WhatsApp bridge updates are not applied automatically for existing installations. After upgrading Lumon AI, rebuild the local bridge with:

```bash
rm -rf ~/.nanobot/bridge && nanobot channels login whatsapp
```
Feishu
Uses WebSocket long connection — no public IP required.
1. Create a Feishu bot
- Visit Feishu Open Platform
- Create a new app → Enable Bot capability
- Permissions: `im:message` (send messages) and `im:message.p2p_msg:readonly` (receive messages)
  - Streaming replies (default in Lumon AI): add `cardkit:card:write` (often labeled Create and update cards in the Feishu developer console). Required for CardKit entities and streamed assistant text. Older apps may not have it yet - open Permission management, enable the scope, then publish a new app version if the console requires it.
  - If you cannot add `cardkit:card:write`, set `"streaming": false` under `channels.feishu` (see below). The bot still works; replies use normal interactive cards without token-by-token streaming.
- Events: Add `im.message.receive_v1` (receive messages)
- Select Long Connection mode (requires running the gateway first to establish connection)
- Get App ID and App Secret from "Credentials & Basic Info"
- Publish the app
2. Configure
```json
{
  "channels": {
    "feishu": {
      "enabled": true,
      "appId": "cli_xxx",
      "appSecret": "xxx",
      "encryptKey": "",
      "verificationToken": "",
      "allowFrom": ["ou_YOUR_OPEN_ID"],
      "groupPolicy": "mention",
      "streaming": true
    }
  }
}
```

- `streaming` defaults to `true`. Use `false` if your app does not have `cardkit:card:write` (see permissions above).
- `encryptKey` and `verificationToken` are optional for Long Connection mode.
- `allowFrom`: Add your open_id (find it in the agent logs when you message the bot). Use `["*"]` to allow all users.
- `groupPolicy`: `"mention"` (default — respond only when @mentioned), `"open"` (respond to all group messages). Private chats always respond.
3. Run

```bash
nanobot gateway
```

> [!TIP]
> Feishu uses WebSocket to receive messages — no webhook or public IP needed!
QQ (QQ单聊)
Uses botpy SDK with WebSocket — no public IP required. Currently supports private messages only.
1. Register & create bot
- Visit QQ Open Platform → Register as a developer (personal or enterprise)
- Create a new bot application
- Go to 开发设置 (Developer Settings) → copy AppID and AppSecret
2. Set up sandbox for testing
- In the bot management console, find 沙箱配置 (Sandbox Config)
- Under 在消息列表配置, click 添加成员 and add your own QQ number
- Once added, scan the bot's QR code with mobile QQ → open the bot profile → tap "发消息" to start chatting
3. Configure

- `allowFrom`: Add your openid (find it in the agent logs when you message the bot). Use `["*"]` for public access.
- `msgFormat`: Optional. Use `"plain"` (default) for maximum compatibility with legacy QQ clients, or `"markdown"` for richer formatting on newer clients.
- For production: submit a review in the bot console and publish. See QQ Bot Docs for the full publishing flow.

```json
{
  "channels": {
    "qq": {
      "enabled": true,
      "appId": "YOUR_APP_ID",
      "secret": "YOUR_APP_SECRET",
      "allowFrom": ["YOUR_OPENID"],
      "msgFormat": "plain"
    }
  }
}
```

4. Run

```bash
nanobot gateway
```

Now send a message to the bot from QQ — it should respond!
DingTalk (钉钉)
Uses Stream Mode — no public IP required.
1. Create a DingTalk bot
- Visit DingTalk Open Platform
- Create a new app → Add Robot capability
- Configuration:
- Toggle Stream Mode ON
- Permissions: Add necessary permissions for sending messages
- Get AppKey (Client ID) and AppSecret (Client Secret) from "Credentials"
- Publish the app
2. Configure
```json
{
  "channels": {
    "dingtalk": {
      "enabled": true,
      "clientId": "YOUR_APP_KEY",
      "clientSecret": "YOUR_APP_SECRET",
      "allowFrom": ["YOUR_STAFF_ID"]
    }
  }
}
```

- `allowFrom`: Add your staff ID. Use `["*"]` to allow all users.

3. Run

```bash
nanobot gateway
```

Slack
Uses Socket Mode — no public URL required.
1. Create a Slack app
- Go to Slack API → Create New App → "From scratch"
- Pick a name and select your workspace
2. Configure the app
- Socket Mode: Toggle ON → Generate an App-Level Token with `connections:write` scope → copy it (`xapp-...`)
- OAuth & Permissions: Add bot scopes: `chat:write`, `reactions:write`, `app_mentions:read`
- Event Subscriptions: Toggle ON → Subscribe to bot events: `message.im`, `message.channels`, `app_mention` → Save Changes
- App Home: Scroll to Show Tabs → Enable Messages Tab → Check "Allow users to send Slash commands and messages from the messages tab"
- Install App: Click Install to Workspace → Authorize → copy the Bot Token (`xoxb-...`)
3. Configure Lumon AI
```json
{
  "channels": {
    "slack": {
      "enabled": true,
      "botToken": "xoxb-...",
      "appToken": "xapp-...",
      "allowFrom": ["YOUR_SLACK_USER_ID"],
      "groupPolicy": "mention"
    }
  }
}
```

4. Run

```bash
nanobot gateway
```

DM the bot directly or @mention it in a channel — it should respond!

> [!TIP]
> - `groupPolicy`: `"mention"` (default — respond only when @mentioned), `"open"` (respond to all channel messages), or `"allowlist"` (restrict to specific channels).
> - DM policy defaults to open. Set `"dm": {"enabled": false}` to disable DMs.

Email

Give Lumon AI its own email account. It polls IMAP for incoming mail and replies via SMTP - like a personal email assistant.
1. Get credentials (Gmail example)

- Create a dedicated Gmail account for your bot (e.g. `my-nanobot@gmail.com`)
- Enable 2-Step Verification → Create an App Password
- Use this app password for both IMAP and SMTP

2. Configure

- `consentGranted` must be `true` to allow mailbox access. This is a safety gate — set `false` to fully disable.
- `allowFrom`: Add your email address. Use `["*"]` to accept emails from anyone.
- `smtpUseTls` and `smtpUseSsl` default to `true`/`false` respectively, which is correct for Gmail (port 587 + STARTTLS). No need to set them explicitly.
- Set `"autoReplyEnabled": false` if you only want to read/analyze emails without sending automatic replies.
- `verifySpf` and `verifyDkim` default to `true` and reject spoofed senders unless `Authentication-Results` shows `spf=pass`/`dkim=pass`.
```json
{
  "channels": {
    "email": {
      "enabled": true,
      "consentGranted": true,
      "imapHost": "imap.gmail.com",
      "imapPort": 993,
      "imapUsername": "my-nanobot@gmail.com",
      "imapPassword": "your-app-password",
      "smtpHost": "smtp.gmail.com",
      "smtpPort": 587,
      "smtpUsername": "my-nanobot@gmail.com",
      "smtpPassword": "your-app-password",
      "fromAddress": "my-nanobot@gmail.com",
      "allowFrom": ["your-real-email@gmail.com"],
      "verifySpf": true,
      "verifyDkim": true
    }
  }
}
```

3. Run

```bash
nanobot gateway
```

WeChat (微信 / Weixin)
Uses HTTP long-poll with QR-code login via the ilinkai personal WeChat API. No local WeChat desktop client is required.
Weixin support is available from source checkout, but is not included in the current PyPI release yet.
1. Install from source
git clone https://github.com/EvanNotFound/lumon.git
cd lumon
pip install -e ".[weixin]"2. Configure
```json
{
  "channels": {
    "weixin": {
      "enabled": true,
      "allowFrom": ["YOUR_WECHAT_USER_ID"]
    }
  }
}
```

- `allowFrom`: Add the sender ID you see in the agent logs for your WeChat account. Use `["*"]` to allow all users.
- `token`: Optional. If omitted, log in interactively and the CLI will save the token for you.
- `routeTag`: Optional. When your upstream Weixin deployment requires request routing, the runtime will send it as the `SKRouteTag` header.
- `stateDir`: Optional. Defaults to the runtime directory for Weixin state.
- `pollTimeout`: Optional long-poll timeout in seconds.

3. Login

```bash
nanobot channels login weixin
```

Use `--force` to re-authenticate and ignore any saved token:

```bash
nanobot channels login weixin --force
```

4. Run
```bash
nanobot gateway
```

Wecom (企业微信)
Here we use wecom-aibot-sdk-python (community Python version of the official @wecom/aibot-node-sdk).
Uses WebSocket long connection — no public IP required.
1. Install the optional dependency

```bash
pip install nanobot-ai[wecom]
```

2. Create a WeCom AI Bot
Go to the WeCom admin console → Intelligent Robot → Create Robot → select API mode with long connection. Copy the Bot ID and Secret.
3. Configure
```json
{
  "channels": {
    "wecom": {
      "enabled": true,
      "botId": "your_bot_id",
      "secret": "your_bot_secret",
      "allowFrom": ["your_id"]
    }
  }
}
```

4. Run

```bash
nanobot gateway
```

Wecom App (企业微信应用)
Uses webhook callback mode — requires a publicly accessible server or port forwarding.
Different from WeCom (WebSocket mode). Choose based on your network environment.
1. Install the optional dependency

```bash
pip install wecom-app-svr
```

2. Create a WeCom AI Bot
Go to the WeCom admin console → My Apps → Create App → Enable API mode. Copy the following credentials:
- Corp ID (from the admin console)
- Agent ID (from the app)
- Secret (from the app)
- Token (you set this when configuring the webhook)
- AES Key (you set this when configuring the webhook)
3. Configure the callback URL
In the WeCom app configuration:

- Set callback URL to: `http://<your-server>:<port>/wecom_app`
- Set the Token and AES Key to match your config
4. Configure
```json
{
  "channels": {
    "wecom_app": {
      "enabled": true,
      "token": "your_token",
      "corpId": "your_corp_id",
      "secret": "your_secret",
      "agentid": "your_agent_id",
      "aesKey": "your_aes_key",
      "host": "0.0.0.0",
      "port": 18791,
      "path": "/wecom_app",
      "allowFrom": ["your_user_id"]
    }
  }
}
```

| Option | Default | Description |
|---|---|---|
| `host` | `0.0.0.0` | Server bind address |
| `port` | `18791` | Server listen port (must match WeCom callback URL) |
| `path` | `/wecom_app` | Callback path |
| `token` | - | Verification token from WeCom admin |
| `aesKey` | - | AES key from WeCom admin |
| `corpId` | - | Your WeCom Corp ID |
| `agentid` | - | Your WeCom App Agent ID |
| `secret` | - | Your WeCom App Secret |
| `welcome_message` | - | Message sent when user enters the chat |

5. Run

```bash
nanobot gateway
```

Note: Wecom App requires the callback URL to be accessible from WeCom servers. If you're running locally, use port forwarding (e.g., ngrok, cloudflare tunnel) or deploy on a public server.
🐈 nanobot is capable of linking to the agent social network (agent community). Just send one message and your nanobot joins automatically!
| Platform | How to Join (send this message to your bot) |
|---|---|
| Moltbook | Read https://moltbook.com/skill.md and follow the instructions to join Moltbook |
| ClawdChat | Read https://clawdchat.ai/skill.md and follow the instructions to join ClawdChat |
Simply send the command above to your nanobot (via CLI or any chat channel), and it will handle the rest.
Config file: ~/.nanobot/config.json
> [!TIP]
> - Groq provides free voice transcription via Whisper. If configured, Telegram voice messages will be automatically transcribed.
> - MiniMax Coding Plan: Exclusive discount links for the nanobot community: Overseas · Mainland China
> - MiniMax (Mainland China): If your API key is from MiniMax's mainland China platform (minimaxi.com), set `"apiBase": "https://api.minimaxi.com/v1"` in your minimax provider config.
> - VolcEngine / BytePlus Coding Plan: Use dedicated providers `volcengineCodingPlan` or `byteplusCodingPlan` instead of the pay-per-use `volcengine`/`byteplus` providers.
> - Zhipu Coding Plan: If you're on Zhipu's coding plan, set `"apiBase": "https://open.bigmodel.cn/api/coding/paas/v4"` in your zhipu provider config.
> - Alibaba Cloud BaiLian: If you're using Alibaba Cloud BaiLian's OpenAI-compatible endpoint, set `"apiBase": "https://dashscope.aliyuncs.com/compatible-mode/v1"` in your dashscope provider config.
> - Step Fun (Mainland China): If your key is from Step Fun (stepfun.com), use provider `stepfun` with `"apiBase": "https://api.stepfun.com/v1"`.
| Provider | Purpose | Get API Key |
|---|---|---|
| `custom` | Any OpenAI-compatible endpoint | — |
| `openrouter` | LLM (recommended, access to all models) | openrouter.ai |
| `volcengine` | LLM (VolcEngine, pay-per-use) | Coding Plan · volcengine.com |
| `byteplus` | LLM (VolcEngine international, pay-per-use) | Coding Plan · byteplus.com |
| `anthropic` | LLM (Claude direct) | console.anthropic.com |
| `azure_openai` | LLM (Azure OpenAI) | portal.azure.com |
| `openai` | LLM (GPT direct) | platform.openai.com |
| `deepseek` | LLM (DeepSeek direct) | platform.deepseek.com |
| `groq` | LLM + Voice transcription (Whisper) | console.groq.com |
| `minimax` | LLM (MiniMax direct) | platform.minimaxi.com |
| `gemini` | LLM (Gemini direct) | aistudio.google.com |
| `aihubmix` | LLM (API gateway, access to all models) | aihubmix.com |
| `siliconflow` | LLM (SiliconFlow/硅基流动) | siliconflow.cn |
| `dashscope` | LLM (Qwen) | dashscope.console.aliyun.com |
| `moonshot` | LLM (Moonshot/Kimi) | platform.moonshot.cn |
| `zhipu` | LLM (Zhipu GLM) | open.bigmodel.cn |
| `stepfun` | LLM (Step Fun/阶跃星辰) | platform.stepfun.com |
| `ollama` | LLM (local, Ollama) | — |
| `mistral` | LLM | docs.mistral.ai |
| `ovms` | LLM (local, OpenVINO Model Server) | docs.openvino.ai |
| `vllm` | LLM (local, any OpenAI-compatible server) | — |
| `openai_codex` | LLM (Codex, OAuth) | `nanobot provider login openai-codex` |
| `github_copilot` | LLM (GitHub Copilot, OAuth) | `nanobot provider login github-copilot` |
OpenAI Codex (OAuth)
Codex uses OAuth instead of API keys. Requires a ChatGPT Plus or Pro account.
No providers.openaiCodex block is needed in config.json; nanobot provider login stores the OAuth session outside config.
1. Login:

```bash
nanobot provider login openai-codex
```

2. Set model (merge into ~/.nanobot/config.json):

```json
{
  "agents": {
    "defaults": {
      "model": "openai-codex/gpt-5.1-codex"
    }
  }
}
```

3. Chat:

```bash
nanobot agent -m "Hello!"
# Target a specific workspace/config locally
nanobot agent -c ~/.nanobot-telegram/config.json -m "Hello!"
# One-off workspace override on top of that config
nanobot agent -c ~/.nanobot-telegram/config.json -w /tmp/nanobot-telegram-test -m "Hello!"
```

Docker users: use `docker run -it` for interactive OAuth login.
GitHub Copilot (OAuth)
GitHub Copilot uses OAuth instead of API keys. Requires a GitHub account with a plan configured.
No providers.githubCopilot block is needed in config.json; nanobot provider login stores the OAuth session outside config.
1. Login:

```bash
nanobot provider login github-copilot
```

2. Set model (merge into ~/.nanobot/config.json):

```json
{
  "agents": {
    "defaults": {
      "model": "github-copilot/gpt-4.1"
    }
  }
}
```

3. Chat:

```bash
nanobot agent -m "Hello!"
# Target a specific workspace/config locally
nanobot agent -c ~/.nanobot-telegram/config.json -m "Hello!"
# One-off workspace override on top of that config
nanobot agent -c ~/.nanobot-telegram/config.json -w /tmp/nanobot-telegram-test -m "Hello!"
```

Docker users: use `docker run -it` for interactive OAuth login.
OpenAI (direct or via your own proxy)
Use provider openai for direct OpenAI usage or for an OpenAI-compatible proxy that should keep OpenAI semantics.
Responses API is used by default for provider: "openai" unless you explicitly set "apiMode": "chat".
```json
{
  "providers": {
    "openai": {
      "apiKey": "your-openai-api-key",
      "apiBase": "https://api.openai.com/v1",
      "promptCacheRetention": "24h"
    }
  },
  "agents": {
    "defaults": {
      "provider": "openai",
      "model": "gpt-5"
    }
  }
}
```

If you use your own OpenAI proxy, keep `provider: "openai"` and point `apiBase` at the proxy:
```json
{
  "providers": {
    "openai": {
      "apiKey": "your-proxy-api-key",
      "apiBase": "https://your-proxy.example/v1",
      "apiMode": "responses",
      "promptCacheRetention": "24h"
    }
  },
  "agents": {
    "defaults": {
      "provider": "openai",
      "model": "gpt-5"
    }
  }
}
```

Valid values:

- `apiMode`: `auto`, `chat`, `responses`
- `promptCacheRetention`: `in-memory`, `24h`
Custom Provider (Any OpenAI-compatible API)
Connects directly to any OpenAI-compatible endpoint — LM Studio, llama.cpp, Together AI, Fireworks, Azure OpenAI, or any self-hosted server. Model name is passed as-is.
```json
{
  "providers": {
    "custom": {
      "apiKey": "your-api-key",
      "apiBase": "https://api.your-provider.com/v1"
    }
  },
  "agents": {
    "defaults": {
      "model": "your-model-name"
    }
  }
}
```

For local servers that don't require a key, set `apiKey` to any non-empty string (e.g. `"no-key"`).

If your custom endpoint supports the OpenAI Responses API, opt in explicitly:

```json
{
  "providers": {
    "custom": {
      "apiKey": "your-api-key",
      "apiBase": "https://api.your-provider.com/v1",
      "apiMode": "responses",
      "promptCacheRetention": "24h"
    }
  },
  "agents": {
    "defaults": {
      "provider": "custom",
      "model": "your-model-name"
    }
  }
}
```

`custom` stays on Chat Completions by default. Use `apiMode: "responses"` only if your endpoint actually supports `/v1/responses`.
Ollama (local)
Run a local model with Ollama, then add to config:
1. Start Ollama (example):

```bash
ollama run llama3.2
```

2. Add to config (partial — merge into ~/.nanobot/config.json):
```json
{
  "providers": {
    "ollama": {
      "apiBase": "http://localhost:11434"
    }
  },
  "agents": {
    "defaults": {
      "provider": "ollama",
      "model": "llama3.2"
    }
  }
}
```

`provider: "auto"` also works when `providers.ollama.apiBase` is configured, but setting `"provider": "ollama"` is the clearest option.
OpenVINO Model Server (local / OpenAI-compatible)
Run LLMs locally on Intel GPUs using OpenVINO Model Server. OVMS exposes an OpenAI-compatible API at /v3.
Requires Docker and an Intel GPU with driver access (
/dev/dri).
1. Pull the model (example):

```bash
mkdir -p ov/models && cd ov
docker run -d \
  --rm \
  --user $(id -u):$(id -g) \
  -v $(pwd)/models:/models \
  openvino/model_server:latest-gpu \
  --pull \
  --model_name openai/gpt-oss-20b \
  --model_repository_path /models \
  --source_model OpenVINO/gpt-oss-20b-int4-ov \
  --task text_generation \
  --tool_parser gptoss \
  --reasoning_parser gptoss \
  --enable_prefix_caching true \
  --target_device GPU
```

This downloads the model weights. Wait for the container to finish before proceeding.
2. Start the server (example):

```bash
docker run -d \
  --rm \
  --name ovms \
  --user $(id -u):$(id -g) \
  -p 8000:8000 \
  -v $(pwd)/models:/models \
  --device /dev/dri \
  --group-add=$(stat -c "%g" /dev/dri/render* | head -n 1) \
  openvino/model_server:latest-gpu \
  --rest_port 8000 \
  --model_name openai/gpt-oss-20b \
  --model_repository_path /models \
  --source_model OpenVINO/gpt-oss-20b-int4-ov \
  --task text_generation \
  --tool_parser gptoss \
  --reasoning_parser gptoss \
  --enable_prefix_caching true \
  --target_device GPU
```

3. Add to config (partial — merge into ~/.nanobot/config.json):
```json
{
  "providers": {
    "ovms": {
      "apiBase": "http://localhost:8000/v3"
    }
  },
  "agents": {
    "defaults": {
      "provider": "ovms",
      "model": "openai/gpt-oss-20b"
    }
  }
}
```

OVMS is a local server — no API key required. Supports tool calling (`--tool_parser gptoss`), reasoning (`--reasoning_parser gptoss`), and streaming. See the official OVMS docs for more details.
vLLM (local / OpenAI-compatible)
Run your own model with vLLM or any OpenAI-compatible server, then add to config:
1. Start the server (example):

```bash
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000
```

2. Add to config (partial — merge into ~/.nanobot/config.json):
Provider (key can be any non-empty string for local):

```json
{
  "providers": {
    "vllm": {
      "apiKey": "dummy",
      "apiBase": "http://localhost:8000/v1"
    }
  }
}
```

Model:

```json
{
  "agents": {
    "defaults": {
      "model": "meta-llama/Llama-3.1-8B-Instruct"
    }
  }
}
```

Adding a New Provider (Developer Guide)
nanobot uses a Provider Registry (nanobot/providers/registry.py) as the single source of truth.
Adding a new provider only takes 2 steps — no if-elif chains to touch.
Step 1. Add a ProviderSpec entry to PROVIDERS in nanobot/providers/registry.py:

```python
ProviderSpec(
    name="myprovider",                    # config field name
    keywords=("myprovider", "mymodel"),   # model-name keywords for auto-matching
    env_key="MYPROVIDER_API_KEY",         # env var name
    display_name="My Provider",           # shown in `nanobot status`
    default_api_base="https://api.myprovider.com/v1",  # OpenAI-compatible endpoint
)
```

Step 2. Add a field to ProvidersConfig in nanobot/config/schema.py:

```python
class ProvidersConfig(BaseModel):
    ...
    myprovider: ProviderConfig = ProviderConfig()
```

That's it! Environment variables, model routing, config matching, and `nanobot status` display will all work automatically.
Common ProviderSpec options:

| Field | Description | Example |
|---|---|---|
| `default_api_base` | OpenAI-compatible base URL | `"https://api.deepseek.com"` |
| `env_extras` | Additional env vars to set | `(("ZHIPUAI_API_KEY", "{api_key}"),)` |
| `model_overrides` | Per-model parameter overrides | `(("kimi-k2.5", {"temperature": 1.0}),)` |
| `is_gateway` | Can route any model (like OpenRouter) | `True` |
| `detect_by_key_prefix` | Detect gateway by API key prefix | `"sk-or-"` |
| `detect_by_base_keyword` | Detect gateway by API base URL | `"openrouter"` |
| `strip_model_prefix` | Strip provider prefix before sending to gateway | `True` (for AiHubMix) |
| `supports_max_completion_tokens` | Use `max_completion_tokens` instead of `max_tokens`; required for providers that reject both being set simultaneously (e.g. VolcEngine) | `True` |
Global settings that apply to all channels. Configure under the channels section in ~/.nanobot/config.json:
```json
{
  "channels": {
    "sendProgress": true,
    "sendReasoningSteps": true,
    "sendToolHints": false,
    "sendMaxRetries": 3,
    "telegram": { ... }
  }
}
```

| Setting | Default | Description |
|---|---|---|
| `sendProgress` | `true` | Stream agent's text progress to the channel |
| `sendReasoningSteps` | `true` | Show visible reasoning-step progress while keeping other progress enabled |
| `sendToolHints` | `false` | Stream tool-call hints (e.g. `read_file("…")`) |
| `sendMaxRetries` | `3` | Max delivery attempts per outbound message, including the initial send (0-10 configured, minimum 1 actual attempt) |
Set sendReasoningSteps to false when you want to keep final answers and tool hints, but hide the agent's intermediate reasoning-step display.
When sendProgress or sendReasoningSteps is false, streaming channels fall back to a final one-shot reply so partial reasoning or tool-planning text is not shown live.
When a channel send operation raises an error, nanobot retries with exponential backoff:
- Attempt 1: Initial send
- Attempts 2-4: Retry delays are 1s, 2s, 4s
- Attempts 5+: Retry delay caps at 4s
- Transient failures (network hiccups, temporary API limits): Retry usually succeeds
- Permanent failures (invalid token, channel banned): All retries fail
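The retry schedule above can be sketched as a small helper. This is an illustrative sketch, not nanobot's actual implementation; `send_max_retries` mirrors the `sendMaxRetries` setting, including its 0-10 clamp and one-attempt minimum.

```python
def retry_delays(send_max_retries: int = 3) -> list[float]:
    """Hypothetical sketch of the backoff schedule described above:
    attempt 1 is the initial send; retries wait 1s, 2s, 4s, then cap at 4s."""
    # clamp: 0-10 configured, but at least one actual attempt always happens
    attempts = max(1, min(send_max_retries, 10))
    # one delay per retry, i.e. attempts 2..attempts
    return [min(2.0 ** i, 4.0) for i in range(attempts - 1)]
```

With `sendMaxRetries: 5` this yields delays of 1s, 2s, 4s, 4s between the five attempts; with `0` or `1`, only the initial send happens and there are no retries.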
> [!NOTE]
> When a channel is completely unavailable, there's no way to notify the user since we cannot reach them through that channel. Monitor logs for "Failed to send to {channel} after N attempts" to detect persistent delivery failures.

> [!TIP]
> Use `proxy` in `tools.web` to route all web requests (search + fetch) through a proxy:

```json
{ "tools": { "web": { "proxy": "http://127.0.0.1:7890" } } }
```

Lumon AI supports multiple web search providers. Configure in ~/.nanobot/config.json under `tools.web.search`.
| Provider | Config fields | Env var fallback | Free |
|---|---|---|---|
| `brave` (default) | `apiKey` | `BRAVE_API_KEY` | No |
| `tavily` | `apiKey` | `TAVILY_API_KEY` | No |
| `jina` | `apiKey` | `JINA_API_KEY` | Free tier (10M tokens) |
| `searxng` | `baseUrl` | `SEARXNG_BASE_URL` | Yes (self-hosted) |
| `duckduckgo` | — | — | Yes |
When credentials are missing, Lumon AI automatically falls back to DuckDuckGo.
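The fallback rule can be sketched as follows. This is an illustrative sketch of the behavior described above, not the actual resolver; for brevity it omits the env-var fallback column from the table (e.g. `BRAVE_API_KEY`), and the function name is hypothetical.

```python
def resolve_search_provider(search_cfg: dict) -> str:
    """Hypothetical sketch: providers missing their required credential
    fall back to DuckDuckGo, which needs no configuration."""
    provider = search_cfg.get("provider", "brave")
    key_based = {"brave", "tavily", "jina"}  # these require apiKey
    if provider in key_based and not search_cfg.get("apiKey"):
        return "duckduckgo"
    if provider == "searxng" and not search_cfg.get("baseUrl"):
        return "duckduckgo"
    return provider
```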
Brave (default):

```json
{
  "tools": {
    "web": {
      "search": {
        "provider": "brave",
        "apiKey": "BSA..."
      }
    }
  }
}
```

Tavily:

```json
{
  "tools": {
    "web": {
      "search": {
        "provider": "tavily",
        "apiKey": "tvly-..."
      }
    }
  }
}
```

Jina (free tier with 10M tokens):

```json
{
  "tools": {
    "web": {
      "search": {
        "provider": "jina",
        "apiKey": "jina_..."
      }
    }
  }
}
```

SearXNG (self-hosted, no API key needed):

```json
{
  "tools": {
    "web": {
      "search": {
        "provider": "searxng",
        "baseUrl": "https://searx.example"
      }
    }
  }
}
```

DuckDuckGo (zero config):

```json
{
  "tools": {
    "web": {
      "search": {
        "provider": "duckduckgo"
      }
    }
  }
}
```
| Option | Type | Default | Description |
|---|---|---|---|
| `provider` | string | `"brave"` | Search backend: `brave`, `tavily`, `jina`, `searxng`, `duckduckgo` |
| `apiKey` | string | `""` | API key for Brave, Tavily, or Jina |
| `baseUrl` | string | `""` | Base URL for SearXNG |
| `maxResults` | integer | `5` | Results per search (1–10) |
> [!TIP]
> The config format is compatible with Claude Desktop / Cursor. You can copy MCP server configs directly from any MCP server's README.
nanobot supports MCP — connect external tool servers and use them as native agent tools.
Add MCP servers to your config.json:
```json
{
  "tools": {
    "mcpServers": {
      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
      },
      "my-remote-mcp": {
        "url": "https://example.com/mcp/",
        "headers": {
          "Authorization": "Bearer xxxxx"
        }
      }
    }
  }
}
```

Two transport modes are supported:
| Mode | Config | Example |
|---|---|---|
| Stdio | `command` + `args` | Local process via `npx` / `uvx` |
| HTTP | `url` + `headers` (optional) | Remote endpoint (`https://mcp.example.com/sse`) |
Use toolTimeout to override the default 30s per-call timeout for slow servers:
```json
{
  "tools": {
    "mcpServers": {
      "my-slow-server": {
        "url": "https://example.com/mcp/",
        "toolTimeout": 120
      }
    }
  }
}
```

Use `enabledTools` to register only a subset of tools from an MCP server:
```json
{
  "tools": {
    "mcpServers": {
      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"],
        "enabledTools": ["read_file", "mcp_filesystem_write_file"]
      }
    }
  }
}
```

`enabledTools` accepts either the raw MCP tool name (for example `read_file`) or the wrapped nanobot tool name (for example `mcp_filesystem_write_file`).
- Omit `enabledTools`, or set it to `["*"]`, to register all tools.
- Set `enabledTools` to `[]` to register no tools from that server.
- Set `enabledTools` to a non-empty list of names to register only that subset.
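For reference, the three modes side by side in one sketch (server names and the URL are placeholders, not part of any real setup):

```json
{
  "tools": {
    "mcpServers": {
      "all-tools": { "url": "https://example.com/mcp/", "enabledTools": ["*"] },
      "no-tools": { "url": "https://example.com/mcp/", "enabledTools": [] },
      "one-tool": { "url": "https://example.com/mcp/", "enabledTools": ["read_file"] }
    }
  }
}
```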
MCP tools are automatically discovered and registered on startup. The LLM can use them alongside built-in tools — no extra configuration needed.
Tip
For production deployments, set "restrictToWorkspace": true in your config to sandbox the agent.
In v0.1.4.post3 and earlier, an empty `allowFrom` allowed all senders. Since v0.1.4.post4, an empty `allowFrom` denies all access by default. To allow all senders, set `"allowFrom": ["*"]`.
| Option | Default | Description |
|---|---|---|
| `tools.restrictToWorkspace` | `false` | When `true`, restricts all agent tools (shell, file read/write/edit, list) to the workspace directory. Prevents path traversal and out-of-scope access. |
| `tools.exec.enable` | `true` | When `false`, the shell `exec` tool is not registered at all. Use this to completely disable shell command execution. |
| `tools.exec.pathAppend` | `""` | Extra directories to append to `PATH` when running shell commands (e.g. `/usr/sbin` for `ufw`). |
| `channels.*.allowFrom` | `[]` (deny all) | Whitelist of user IDs. Empty denies all; use `["*"]` to allow everyone. |
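Putting the table together, a hardened config might look like the sketch below (the channel and user ID are illustrative, not a prescribed production setup):

```json
{
  "tools": {
    "restrictToWorkspace": true,
    "exec": {
      "enable": false
    }
  },
  "channels": {
    "telegram": {
      "enabled": true,
      "allowFrom": ["123456789"]
    }
  }
}
```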
Run multiple nanobot instances simultaneously with separate configs and runtime data. Use --config as the main entrypoint. For onboarding, you can use either --dir (recommended) or explicit --config + --workspace.
If you want each instance to have its own dedicated workspace from the start, use --dir.
Initialize instances:
```shell
# Create separate instance configs and workspaces under each directory
nanobot onboard --dir ~/.nanobot-telegram
nanobot onboard --dir ~/.nanobot-discord
nanobot onboard --dir ~/.nanobot-feishu

# Equivalent explicit form:
# nanobot onboard --config ~/.nanobot-telegram/config.json --workspace ~/.nanobot-telegram/workspace
```

Configure each instance:
Edit ~/.nanobot-telegram/config.json, ~/.nanobot-discord/config.json, etc. with different channel settings. The workspace you passed during onboard is saved into each config as that instance's default workspace.
Run instances:
```shell
# Instance A - Telegram bot
nanobot gateway --config ~/.nanobot-telegram/config.json

# Instance B - Discord bot
nanobot gateway --config ~/.nanobot-discord/config.json

# Instance C - Feishu bot with custom port
nanobot gateway --config ~/.nanobot-feishu/config.json --port 18792
```

When using `--config`, nanobot derives its runtime data directory from the config file location. The workspace still comes from `agents.defaults.workspace` unless you override it with `--workspace`.
To open a CLI session against one of these instances locally:
```shell
nanobot agent -c ~/.nanobot-telegram/config.json -m "Hello from Telegram instance"
nanobot agent -c ~/.nanobot-discord/config.json -m "Hello from Discord instance"

# Optional one-off workspace override
nanobot agent -c ~/.nanobot-telegram/config.json -w /tmp/nanobot-telegram-test
```

> `nanobot agent` starts a local CLI agent using the selected workspace/config. It does not attach to or proxy through an already running `nanobot gateway` process.
| Component | Resolved From | Example |
|---|---|---|
| Config | `--config` path | `~/.nanobot-A/config.json` |
| Workspace | `--workspace` or config | `~/.nanobot-A/workspace/` |
| Cron Jobs | config directory | ~/.nanobot-A/cron/ |
| Media / runtime state | config directory | ~/.nanobot-A/media/ |
- `--config` selects which config file to load
- By default, the workspace comes from `agents.defaults.workspace` in that config
- If you pass `--workspace`, it overrides the workspace from the config file
- Copy your base config into a new instance directory.
- Set a different `agents.defaults.workspace` for that instance.
- Start the instance with `--config`.
Example config:
```json
{
  "agents": {
    "defaults": {
      "workspace": "~/.nanobot-telegram/workspace",
      "model": "anthropic/claude-sonnet-4-6"
    }
  },
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_TELEGRAM_BOT_TOKEN"
    }
  },
  "gateway": {
    "port": 18790
  }
}
```

Start separate instances:

```shell
nanobot gateway --config ~/.nanobot-telegram/config.json
nanobot gateway --config ~/.nanobot-discord/config.json
```

Override workspace for one-off runs when needed:

```shell
nanobot gateway --config ~/.nanobot-telegram/config.json --workspace /tmp/nanobot-telegram-test
```

- Run separate bots for Telegram, Discord, Feishu, and other platforms
- Keep testing and production instances isolated
- Use different models or providers for different teams
- Serve multiple tenants with separate configs and runtime data
- Each instance must use a different port if they run at the same time
- Use a different workspace per instance if you want isolated memory, sessions, and skills
- `--workspace` overrides the workspace defined in the config file
- Cron jobs and runtime media/state are derived from the config directory
| Command | Description |
|---|---|
| `nanobot onboard` | Initialize config & workspace at `~/.nanobot/` |
| `nanobot onboard --wizard` | Launch the interactive onboarding wizard |
| `nanobot onboard --dir <dir>` | Initialize instance config/workspace under `<dir>` |
| `nanobot onboard -c <config> -w <workspace>` | Initialize or refresh a specific instance config and workspace |
| `nanobot agent -m "..."` | Chat with the agent |
| `nanobot agent -w <workspace>` | Chat against a specific workspace |
| `nanobot agent -w <workspace> -c <config>` | Chat against a specific workspace/config |
| `nanobot agent` | Interactive chat mode |
| `nanobot agent --no-markdown` | Show plain-text replies |
| `nanobot agent --logs` | Show runtime logs during chat |
| `nanobot gateway` | Start the gateway |
| `nanobot status` | Show status |
| `nanobot provider login openai-codex` | OAuth login for providers |
| `nanobot channels login <channel>` | Authenticate a channel interactively |
| `nanobot channels status` | Show channel status |
Interactive mode exits: `exit`, `quit`, `/exit`, `/quit`, `:q`, or Ctrl+D.
Heartbeat (Periodic Tasks)
The gateway wakes up every 30 minutes and checks HEARTBEAT.md in your workspace (~/.nanobot/workspace/HEARTBEAT.md). If the file has tasks, the agent executes them and delivers results to your most recently active chat channel.
Setup: edit ~/.nanobot/workspace/HEARTBEAT.md (created automatically by nanobot onboard):
```markdown
## Periodic Tasks

- [ ] Check weather forecast and send a summary
- [ ] Scan inbox for urgent emails
```

The agent can also manage this file itself — ask it to "add a periodic task" and it will update HEARTBEAT.md for you.
> Note: The gateway must be running (`nanobot gateway`) and you must have chatted with the bot at least once so it knows which channel to deliver to.
Tip
The -v ~/.nanobot:/root/.nanobot flag mounts your local config directory into the container, so your config and workspace persist across container restarts.
```shell
docker compose run --rm nanobot-cli onboard              # first-time setup
vim ~/.nanobot/config.json                               # add API keys
docker compose up -d nanobot-gateway                     # start gateway
docker compose run --rm nanobot-cli agent -m "Hello!"    # run CLI
docker compose logs -f nanobot-gateway                   # view logs
docker compose down                                      # stop
```

Without Compose, using plain Docker:

```shell
# Build the image
docker build -t nanobot .

# Initialize config (first time only)
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot onboard

# Edit config on host to add API keys
vim ~/.nanobot/config.json

# Run gateway (connects to enabled channels, e.g. Telegram/Discord/Mochat)
docker run -v ~/.nanobot:/root/.nanobot -p 18790:18790 nanobot gateway

# Or run a single command
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot agent -m "Hello!"
docker run -v ~/.nanobot:/root/.nanobot --rm nanobot status
```

Run the gateway as a systemd user service so it starts automatically and restarts on failure.
1. Find the nanobot binary path:
```shell
which nanobot   # e.g. /home/user/.local/bin/nanobot
```

2. Create the service file at `~/.config/systemd/user/nanobot-gateway.service` (replace the `ExecStart` path if needed):
```ini
[Unit]
Description=Nanobot Gateway
After=network.target

[Service]
Type=simple
ExecStart=%h/.local/bin/nanobot gateway
Restart=always
RestartSec=10
NoNewPrivileges=yes
ProtectSystem=strict
ReadWritePaths=%h

[Install]
WantedBy=default.target
```

3. Enable and start:
```shell
systemctl --user daemon-reload
systemctl --user enable --now nanobot-gateway
```

Common operations:

```shell
systemctl --user status nanobot-gateway    # check status
systemctl --user restart nanobot-gateway   # restart after config changes
journalctl --user -u nanobot-gateway -f    # follow logs
```

If you edit the `.service` file itself, run `systemctl --user daemon-reload` before restarting.
Note: User services only run while you are logged in. To keep the gateway running after logout, enable lingering:
```shell
loginctl enable-linger $USER
```
Use .github/workflows/deploy.yml to redeploy automatically when the Test Suite workflow passes on main.
Required repository secrets:
- `VPS_HOST` (for example `203.0.113.10`)
- `VPS_SSH_KEY` (private key for SSH auth)
- `VPS_KNOWN_HOSTS` (host key line from `ssh-keyscan`)
- `VPS_USER` (optional, defaults to `ubuntu`)
- `VPS_SSH_PORT` (optional, defaults to `22`)
Example to generate VPS_KNOWN_HOSTS locally:
```shell
ssh-keyscan -p 22 your-vps-hostname
```

The deploy job SSHes into the VPS and runs:
```shell
cd /home/ubuntu/nanobot
git fetch origin main
git checkout main
git pull --ff-only origin main
uv tool install --reinstall --from /home/ubuntu/nanobot nanobot-ai
systemctl --user restart nanobot-gateway.service
```

Prerequisites on the VPS:
- Repo is cloned at `/home/ubuntu/nanobot`
- `uv` is installed for the `ubuntu` user
- `nanobot-gateway.service` is already set up as a user service
- User lingering is enabled if the user session is not always logged in
```
nanobot/
├── agent/          # 🧠 Core agent logic
│   ├── loop.py     # Agent loop (LLM ↔ tool execution)
│   ├── context.py  # Prompt builder
│   ├── memory.py   # Persistent memory
│   ├── skills.py   # Skills loader
│   ├── subagent.py # Background task execution
│   └── tools/      # Built-in tools (incl. spawn)
├── skills/         # 🎯 Bundled skills (github, weather, tmux...)
├── channels/       # 📱 Chat channel integrations (supports plugins)
├── bus/            # 🚌 Message routing
├── cron/           # ⏰ Scheduled tasks
├── heartbeat/      # 💓 Proactive wake-up
├── providers/      # 🤖 LLM providers (OpenRouter, etc.)
├── session/        # 💬 Conversation sessions
├── config/         # ⚙️ Configuration
└── cli/            # 🖥️ Commands
```
PRs welcome! The codebase is intentionally small and readable. 🤗
Use main for all pull requests.
Unsure which branch to target? See CONTRIBUTING.md for details.
Roadmap - Pick an item and open a PR!
- Multi-modal — See and hear (images, voice, video)
- Long-term memory — Never forget important context
- Better reasoning — Multi-step planning and reflection
- More integrations — Calendar and more
- Self-improvement — Learn from feedback and mistakes



