Askaosus is a Python Matrix bot that searches a Discourse forum to answer questions, with intelligent context handling.
- Responds to mentions in Matrix rooms
- Smart context handling: Automatically includes replied-to messages when mentioned in replies
- Searches Discourse forum posts to find relevant answers
- Supports multiple LLM providers (OpenAI, OpenRouter, Gemini)
- Configurable responses: Customizable bot messages in multiple languages
- Robust error handling with graceful fallbacks
- Configurable through environment variables
- Dockerized for easy deployment
- Stateless design with persistent Matrix session
- Copy `.env.example` to `.env` and configure your settings
- Start the bot with Docker Compose:

```shell
docker compose up -d
```
Docker image tags: `latest` tracks the `main` branch, and each release has its own tag.
For development with hot reloading (no Docker needed):
```shell
# Install dependencies including dev support
pip install -r requirements.txt

# Run the bot with auto-reload on file changes
python dev.py
```
The bot operates as follows:
- Listening: Monitors Matrix rooms for messages mentioning the bot
- Question Detection: Extracts the question from the message or reply
- Context Analysis: When mentioned in a reply, automatically fetches and includes the original message as context
- Discourse Search: Searches the configured Discourse forum for relevant posts
- AI Processing: Uses the configured LLM to generate a helpful answer based on forum content
- Response: Sends a concise response with links to relevant forum posts
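The steps above can be sketched as a single handler. All function and parameter names here are illustrative, not the bot's actual API; the search and LLM calls are stubbed:

```python
def handle_message(event, search_forum, ask_llm):
    """Illustrative pipeline: detect a question, search Discourse, answer."""
    question = event.get("body", "").strip()
    if not question:
        return None
    # Search the forum for relevant posts (stubbed callable here)
    posts = search_forum(question)
    # Ask the LLM for an answer grounded in the forum content
    answer = ask_llm(question, posts)
    links = [p["url"] for p in posts]
    return {"answer": answer, "links": links}

# Example run with stubbed dependencies
result = handle_message(
    {"body": "How do I install software?"},
    search_forum=lambda q: [{"url": "https://discourse.example.org/t/1"}],
    ask_llm=lambda q, posts: "Use your package manager.",
)
```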
The bot responds to:
- Direct mentions: `@botname How do I install software?`
- Name mentions: `botname What is the best text editor?`
- Replies with context: reply to any message with `@botname` to automatically include the original message as context for better understanding
When you mention the bot in a reply to another message, it automatically:
- Fetches the original message content
- Combines both the original message and your reply as context
- Provides more targeted and relevant search results
- Handles errors gracefully (provides fallback context if original message can't be retrieved)
Example:
User A: "I'm having trouble with my graphics driver"
User B: "@botname can you help with this?" (as a reply)
The bot receives both messages and searches for graphics driver issues specifically.
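The reply-context handling above can be sketched as follows. The event shape follows the standard Matrix `m.in_reply_to` relation; `build_context` and the `fetch_event` callable are illustrative names, not the bot's actual API:

```python
def build_context(event_content, fetch_event):
    """Combine the replied-to message with the reply, with a graceful fallback.

    `fetch_event` returns the original event's body, or raises if it
    cannot be retrieved.
    """
    reply_to = (
        event_content.get("m.relates_to", {})
        .get("m.in_reply_to", {})
        .get("event_id")
    )
    question = event_content.get("body", "")
    if not reply_to:
        return question
    try:
        original = fetch_event(reply_to)
        return f"{original}\n---\n{question}"
    except Exception:
        # Fallback: answer from the reply alone
        return question

content = {
    "body": "@botname can you help with this?",
    "m.relates_to": {"m.in_reply_to": {"event_id": "$orig"}},
}
ctx = build_context(content, lambda eid: "I'm having trouble with my graphics driver")

def _unavailable(event_id):
    raise RuntimeError("not retrievable")

fallback_ctx = build_context(content, _unavailable)
```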
The bot provides:
- A concise answer in the same language as the question
- References to relevant Discourse posts
- Direct links for further reading
- Context-aware responses when replying to conversations
The bot is configured through environment variables that can be set in your shell or directly in the Docker Compose files. All configuration options have sensible defaults where possible.
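A sketch of environment-based loading with defaults. The variable names match the tables that follow; the loader function itself is illustrative:

```python
import os

def load_config(env):
    """Read settings from an environment mapping, with sensible defaults."""
    return {
        "homeserver": env.get("MATRIX_HOMESERVER_URL", "https://matrix.org"),
        "llm_provider": env.get("LLM_PROVIDER", "openai"),
        "max_results": int(env.get("BOT_MAX_SEARCH_RESULTS", "5")),
        "rate_limit": float(env.get("BOT_RATE_LIMIT_SECONDS", "1.0")),
    }

config = load_config(os.environ)   # real environment
defaults = load_config({})         # all defaults
```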
| Variable | Description | Default | Required |
|---|---|---|---|
| `MATRIX_HOMESERVER_URL` | The Matrix homeserver URL | `https://matrix.org` | ✅ |
| `MATRIX_USER_ID` | The bot's Matrix user ID (e.g., `@mybot:matrix.org`) | `@mybot:matrix.org` | ✅ |
| `MATRIX_PASSWORD` | The bot's Matrix account password | - | ✅ |
| `MATRIX_DEVICE_NAME` | Device name for the Matrix session | `discourse-bot-python` | ❌ |
| `MATRIX_STORE_PATH` | Path to store Matrix session data | `/app/data/matrix_store` | ❌ |
| Variable | Description | Default | Required |
|---|---|---|---|
| `DISCOURSE_BASE_URL` | The Discourse forum base URL | `https://discourse.example.org` | ❌ |
| `DISCOURSE_API_KEY` | Discourse API key for authenticated requests | - | ❌ |
| `DISCOURSE_USERNAME` | Discourse username for API authentication | - | ❌ |
Note: API key and username are optional but recommended for higher rate limits and access to private content.
| Variable | Description | Default | Required |
|---|---|---|---|
| `LLM_PROVIDER` | LLM provider (`openai`, `openrouter`, `gemini`) | `openai` | ❌ |
| `LLM_API_KEY` | API key for the LLM provider | - | ✅ |
| `LLM_BASE_URL` | Custom API endpoint URL | Auto-detected based on provider | ❌ |
| `LLM_MODEL` | Model name to use | `gpt-4` | ❌ |
| `LLM_MAX_TOKENS` | Maximum tokens in a response | `500` | ❌ |
| `LLM_TEMPERATURE` | Response creativity (0.0-1.0) | `0.7` | ❌ |
- Set `LLM_PROVIDER=openai`
- Uses `https://api.openai.com/v1` as the base URL
- Models: `gpt-4`, `gpt-4-turbo`, `gpt-3.5-turbo`, etc.
- Set `LLM_PROVIDER=openrouter`
- Uses `https://openrouter.ai/api/v1` as the base URL
- Models: any model available on OpenRouter (e.g., `openai/gpt-4`, `anthropic/claude-3-sonnet`)
- Automatically includes required headers
- Set `LLM_PROVIDER=gemini`
- Uses `https://generativelanguage.googleapis.com/v1beta` as the base URL
- Models: `gemini-pro`, `gemini-pro-vision`, etc.
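The "auto-detected base URL" behavior can be sketched like this. The URL mapping comes straight from the provider notes above; the function itself is illustrative, not the bot's actual code:

```python
# Base URLs per provider, as documented above
PROVIDER_BASE_URLS = {
    "openai": "https://api.openai.com/v1",
    "openrouter": "https://openrouter.ai/api/v1",
    "gemini": "https://generativelanguage.googleapis.com/v1beta",
}

def resolve_base_url(provider, override=None):
    """Use LLM_BASE_URL if set; otherwise fall back to the provider default."""
    if override:
        return override
    return PROVIDER_BASE_URLS[provider]
```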
| Variable | Description | Default | Required |
|---|---|---|---|
| `BOT_MENTIONS` | Comma-separated list of mentions that trigger the bot | `@askaosus,askaosus` | ❌ |
| `BOT_RATE_LIMIT_SECONDS` | Minimum seconds between responses | `1.0` | ❌ |
| `BOT_MAX_SEARCH_RESULTS` | Maximum Discourse posts to search | `5` | ❌ |
| `BOT_DEBUG` | Enable debug mode | `false` | ❌ |
| `BOT_MAX_SEARCH_ITERATIONS` | Maximum number of search iterations | `3` | ❌ |
| `BOT_UTM_TAGS` | UTM parameters to add to shared links (format: `utm_source=bot&utm_medium=matrix&utm_campaign=help`) | `""` | ❌ |
| Variable | Description | Default | Required |
|---|---|---|---|
| `LOG_LEVEL` | Logging level (`DEBUG`, `INFO`, `WARNING`, `ERROR`) | `INFO` | ❌ |
The bot's responses can be customized through a `responses.json` file. This allows changing error messages, labels, and system responses without modifying the code.
- Location: the bot looks for `responses.json` at `/app/responses.json` (production) or in the current directory (development)
- Multi-language: supports Arabic and English responses with automatic fallback
- Categories: error messages, Discourse labels, and system messages
- Documentation: see `docs/configurable-responses.md` for full details
If no configuration file is found, the bot uses built-in default responses.
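The lookup-with-fallback described above can be sketched as follows. The default dictionary's key is illustrative, not the bot's actual response schema:

```python
import json
import os

# Built-in fallback used when no file is found (illustrative key name)
DEFAULT_RESPONSES = {"error_generic": "Something went wrong."}

def load_responses(paths=("/app/responses.json", "responses.json")):
    """Try the production path, then the current directory, then defaults."""
    for path in paths:
        if os.path.exists(path):
            with open(path, encoding="utf-8") as f:
                return json.load(f)
    return DEFAULT_RESPONSES

responses = load_responses()
```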
```shell
export MATRIX_HOMESERVER_URL="https://matrix.org"
export MATRIX_USER_ID="@mybot:matrix.org"
export MATRIX_PASSWORD="your_matrix_password"
export LLM_PROVIDER="openai"
export LLM_API_KEY="sk-..."
export LLM_MODEL="gpt-4"
export BOT_MENTIONS="@mybot,mybot"
export DISCOURSE_BASE_URL="https://discourse.example.org"

docker compose up -d
```
```shell
export MATRIX_HOMESERVER_URL="https://matrix.example.org"
export MATRIX_USER_ID="@mybot:example.org"
export MATRIX_PASSWORD="your_password"
export LLM_PROVIDER="openrouter"
export LLM_API_KEY="sk-or-v1-..."
export LLM_MODEL="openai/gpt-4-turbo"
export DISCOURSE_BASE_URL="https://discourse.example.org"
export BOT_MENTIONS="@mybot,mybot"
export BOT_UTM_TAGS="utm_source=matrixbot&utm_medium=matrix&utm_campaign=support"

docker compose up -d
```
```shell
export MATRIX_PASSWORD="your_password"
export LLM_API_KEY="your_api_key"

# Use development compose with hot reload
docker compose -f docker-compose.dev.yml up --watch
```
- Check that the bot is properly logged into Matrix: `docker compose logs`
- Verify the Matrix credentials are correct
- Ensure the bot has permission to read messages in the room
- Verify `DISCOURSE_BASE_URL` is correct
- Check that the Discourse API credentials are valid (optional but recommended)
- Test the Discourse search manually: `https://discourse.aosus.org/search.json?q=your_query`
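The manual search test above can also be run from Python. Only the URL-building is exercised here; `discourse_search` requires network access, and both function names are illustrative:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def build_search_url(base_url, query):
    """Build the public Discourse search endpoint URL."""
    return f"{base_url.rstrip('/')}/search.json?{urlencode({'q': query})}"

def discourse_search(base_url, query, timeout=10):
    """Fetch and decode the search results (needs network access)."""
    with urlopen(build_search_url(base_url, query), timeout=timeout) as resp:
        return json.loads(resp.read())

url = build_search_url("https://discourse.aosus.org", "graphics driver")
```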
- Verify the API key is valid and has sufficient credits
- Check the model name is correct for your provider
- Ensure the base URL is appropriate for your provider
- Increase `BOT_RATE_LIMIT_SECONDS` if responses are too frequent
- Check your LLM provider's rate limits
- Monitor logs for rate-limit errors
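A minimum-interval limiter matching the `BOT_RATE_LIMIT_SECONDS` behavior can be sketched like this (illustrative class, not the bot's implementation; the fake clock makes the demo deterministic):

```python
import time

class RateLimiter:
    """Allow at most one response per `min_interval` seconds."""

    def __init__(self, min_interval=1.0, clock=time.monotonic):
        self.min_interval = min_interval
        self.clock = clock
        self._last = None

    def allow(self):
        now = self.clock()
        if self._last is not None and now - self._last < self.min_interval:
            return False
        self._last = now
        return True

# Deterministic demo with a fake clock: ticks at 0.0s, 0.5s, 2.0s
ticks = iter([0.0, 0.5, 2.0])
rl = RateLimiter(min_interval=1.0, clock=lambda: next(ticks))
results = [rl.allow(), rl.allow(), rl.allow()]  # [True, False, True]
```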
```shell
# View current logs
docker compose logs -f

# View development logs
docker compose -f docker-compose.dev.yml logs -f

# Check specific container logs
docker logs matrix-discourse-bot
```
Enable debug mode for detailed logging:
```shell
export BOT_DEBUG=true
export LOG_LEVEL=DEBUG
```
Or use the development compose file which has debug enabled by default.
- Fork the repository
- Create a feature branch
- Make your changes
- Test using the development environment
- Submit a pull request
This project is licensed under the Apache-2.0 License - see the LICENSE file for details.