A high-performance matching engine written in Rust, designed to handle tens of thousands of orders per second using the Event Sourcing pattern with a CQRS architecture.
```
Client Request Queue >> +-----------+ >> Event Logs
                        |  Relayer  |
                        +-----------+
                              |
                              v
                      [ PostgreSQL DB ]
                              |
                              v
                       [ Redis Cache ]
```
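The flow above can be sketched as an event-sourced state machine: commands produce events, and current state is derived by replaying the event log. This is an illustrative sketch only — `OrderEvent`, `OrderBookState`, and the replay function are hypothetical, not the relayer's actual types.

```rust
use std::collections::HashMap;

// Hypothetical event type; the real relayer's events are richer.
#[derive(Debug, Clone)]
enum OrderEvent {
    Placed { id: u64, qty: u64 },
    Filled { id: u64, qty: u64 },
    Cancelled { id: u64 },
}

#[derive(Debug, Default)]
struct OrderBookState {
    open_qty: HashMap<u64, u64>,
}

impl OrderBookState {
    // Applying an event is the only way state changes.
    fn apply(&mut self, event: &OrderEvent) {
        match event {
            OrderEvent::Placed { id, qty } => {
                self.open_qty.insert(*id, *qty);
            }
            OrderEvent::Filled { id, qty } => {
                if let Some(open) = self.open_qty.get_mut(id) {
                    *open = open.saturating_sub(*qty);
                    if *open == 0 {
                        self.open_qty.remove(id);
                    }
                }
            }
            OrderEvent::Cancelled { id } => {
                self.open_qty.remove(id);
            }
        }
    }

    // Rebuild state from the persisted event log (e.g. Kafka / PostgreSQL).
    fn replay<'a>(events: impl IntoIterator<Item = &'a OrderEvent>) -> Self {
        let mut state = Self::default();
        for e in events {
            state.apply(e);
        }
        state
    }
}

fn main() {
    let log = vec![
        OrderEvent::Placed { id: 1, qty: 100 },
        OrderEvent::Filled { id: 1, qty: 40 },
        OrderEvent::Placed { id: 2, qty: 50 },
        OrderEvent::Cancelled { id: 2 },
    ];
    let state = OrderBookState::replay(&log);
    assert_eq!(state.open_qty.get(&1), Some(&60));
    assert_eq!(state.open_qty.get(&2), None);
    println!("open orders: {:?}", state.open_qty);
}
```

Because state is a pure fold over the log, the same replay also powers crash recovery and the snapshot topics described below.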
The relayer core implements:
- Event Sourcing: All state changes are stored as events
- CQRS Pattern: Command Query Responsibility Segregation
- High-Performance Matching: Handles thousands of orders per second
- Real-time Processing: WebSocket connections for live price feeds
- Blockchain Integration: ZKOS chain transaction support
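The matching feature above amounts to price-time priority: best price first, oldest order first within a price level. A minimal sketch under that assumption — the `AskBook` type and its methods are hypothetical illustrations, not the engine's API:

```rust
use std::collections::{BTreeMap, VecDeque};

#[derive(Debug, Clone)]
struct Order {
    id: u64,
    qty: u64,
}

// Resting sell orders keyed by price; a FIFO queue per level gives time priority.
#[derive(Default)]
struct AskBook {
    levels: BTreeMap<u64, VecDeque<Order>>,
}

impl AskBook {
    fn insert(&mut self, price: u64, order: Order) {
        self.levels.entry(price).or_default().push_back(order);
    }

    // Match an incoming buy: sweep ask levels from best (lowest) price upward
    // while the taker's limit allows. Returns (maker_id, price, qty) fills.
    fn match_buy(&mut self, limit: u64, mut qty: u64) -> Vec<(u64, u64, u64)> {
        let mut fills = Vec::new();
        let prices: Vec<u64> = self.levels.range(..=limit).map(|(p, _)| *p).collect();
        for price in prices {
            let queue = self.levels.get_mut(&price).unwrap();
            while qty > 0 {
                let Some(maker) = queue.front_mut() else { break };
                let traded = qty.min(maker.qty);
                fills.push((maker.id, price, traded));
                qty -= traded;
                maker.qty -= traded;
                if maker.qty == 0 {
                    queue.pop_front(); // fully filled: drop from the book
                }
            }
            if queue.is_empty() {
                self.levels.remove(&price);
            }
            if qty == 0 {
                break;
            }
        }
        fills
    }
}

fn main() {
    let mut book = AskBook::default();
    book.insert(101, Order { id: 1, qty: 30 });
    book.insert(100, Order { id: 2, qty: 50 });
    // Buy 60 at limit 101: fills 50 @ 100 (best price first), then 10 @ 101.
    let fills = book.match_buy(101, 60);
    assert_eq!(fills, vec![(2, 100, 50), (1, 101, 10)]);
}
```

In the event-sourced design, each element of `fills` would be emitted as a fill event rather than applied to state directly.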
Before running the relayer core, ensure you have the following installed:
- Rust (1.87.0+) - Install Rust
- Docker & Docker Compose - Install Docker
- PostgreSQL (13+)
- Redis (6+)
- Apache Kafka with Zookeeper
```bash
git clone https://github.com/twilight-project/relayer-core.git
cd relayer-core
```

Create your environment configuration file:

```bash
cp .env.example .env
```

Edit the `.env` file with your specific configuration:

```bash
nano .env
```

Start the required services using Docker Compose:
```bash
# Start Kafka, Zookeeper
docker compose up --build kafka zookeeper
```

Create the necessary Kafka topics for message queuing:

```bash
# Create all required topics
docker exec -it zookeeper sh -c "cd usr/bin && kafka-topics --topic CLIENT-REQUEST --create --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --config retention.ms=-1 --config cleanup.policy=compact --config message.timestamp.type=LogAppendTime" && \
docker exec -it zookeeper sh -c "cd usr/bin && kafka-topics --topic SnapShotLogTopic --create --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --config retention.ms=-1 --config cleanup.policy=compact --config message.timestamp.type=LogAppendTime" && \
docker exec -it zookeeper sh -c "cd usr/bin && kafka-topics --topic CoreEventLogTopic --create --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --config retention.ms=-1 --config cleanup.policy=compact --config message.timestamp.type=LogAppendTime" && \
docker exec -it zookeeper sh -c "cd usr/bin && kafka-topics --topic RelayerStateQueue --create --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --config retention.ms=-1 --config cleanup.policy=compact --config message.timestamp.type=LogAppendTime" && \
docker exec -it zookeeper sh -c "cd usr/bin && kafka-topics --topic CLIENT-FAILED-REQUEST --create --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --config retention.ms=-1 --config cleanup.policy=compact --config message.timestamp.type=LogAppendTime"
```

```bash
# Build the project
cargo build

# Run the relayer core
cargo run --bin main
```

```bash
# Build optimized release
cargo build --release

# Run the optimized binary
./target/release/main
```

```bash
# Build and run all services
docker-compose up --build
```

The relayer core uses environment variables for configuration. Here are the key categories:
```bash
# Kafka broker
BROKER=localhost:9092

# Kafka topics
RPC_CLIENT_REQUEST=CLIENT-REQUEST
CORE_EVENT_LOG=CoreEventLogTopic
SNAPSHOT_LOG=SnapShotLogTopic
RELAYER_STATE_QUEUE=RelayerStateQueue

# Relayer admin server settings
RELAYER_SERVER_SOCKETADDR=0.0.0.0:3031
RELAYER_SERVER_THREAD=2

# Trading fees (as percentages)
FILLED_ON_MARKET=0.04
FILLED_ON_LIMIT=0.02
SETTLED_ON_MARKET=0.04
SETTLED_ON_LIMIT=0.02
```
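Assuming the fee settings above are percentages of traded notional (e.g. `FILLED_ON_MARKET=0.04` meaning 0.04%), a hypothetical fee computation might look like this — the function name and defaulting behavior are illustrative, not the relayer's actual fee logic:

```rust
// Hypothetical: fee_pct is a percentage, so 0.04 means 0.04% of notional.
fn fee(notional: f64, fee_pct: f64) -> f64 {
    notional * fee_pct / 100.0
}

fn main() {
    // In practice the value would come from the environment; the default
    // here mirrors the example .env above.
    let filled_on_market: f64 = std::env::var("FILLED_ON_MARKET")
        .unwrap_or_else(|_| "0.04".to_string())
        .parse()
        .expect("FILLED_ON_MARKET must be numeric");

    let notional = 10_000.0;
    let fee_amount = fee(notional, filled_on_market);
    assert!((fee_amount - 4.0).abs() < 1e-9);
    println!("fee on {notional}: {fee_amount}");
}
```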
```bash
# Wallet security (REQUIRED)
RELAYER_WALLET_IV=your_wallet_iv_here
RELAYER_WALLET_SEED=your_wallet_seed_here
RELAYER_WALLET_PATH=/path/to/wallet/file
RELAYER_WALLET_PASSWORD=your_wallet_password_here

# Blockchain transactions
ENABLE_ZKOS_CHAIN_TRANSACTION=true
```

To create a template for new deployments:
```bash
# Copy your configured .env to create an example
cp .env .env.example

# Remove sensitive information from .env.example
sed -i 's/=.*/=/' .env.example
```

Build the project:

```bash
cargo build
cargo build --release
```

- Start Dependencies:
```bash
docker compose build
docker compose up --build kafka zookeeper
```

```bash
docker exec -it zookeeper sh -c "cd usr/bin && kafka-topics --topic CLIENT-REQUEST --create --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --config retention.ms=-1 --config cleanup.policy=compact --config message.timestamp.type=LogAppendTime" && \
docker exec -it zookeeper sh -c "cd usr/bin && kafka-topics --topic SnapShotLogTopic --create --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --config retention.ms=-1 --config cleanup.policy=compact --config message.timestamp.type=LogAppendTime" && \
docker exec -it zookeeper sh -c "cd usr/bin && kafka-topics --topic CoreEventLogTopic --create --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --config retention.ms=-1 --config cleanup.policy=compact --config message.timestamp.type=LogAppendTime" && \
docker exec -it zookeeper sh -c "cd usr/bin && kafka-topics --topic RelayerStateQueue --create --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --config retention.ms=-1 --config cleanup.policy=compact --config message.timestamp.type=LogAppendTime" && \
docker exec -it zookeeper sh -c "cd usr/bin && kafka-topics --topic CLIENT-FAILED-REQUEST --create --zookeeper zookeeper:2181 --partitions 1 --replication-factor 1 --config retention.ms=-1 --config cleanup.policy=compact --config message.timestamp.type=LogAppendTime"
```

```bash
chmod -R 777 dockerize/redisDB/log/
docker compose up --build postgresql-master redis-server database
docker compose up --build rpckafka querykafka psql
docker compose up --build auth api archiver frontend
docker compose up --build relayer-dev
```

For production deployments, use Supervisor for process management:
```bash
# Install Supervisor
sudo apt update
sudo apt install supervisor

# Create log directory
mkdir -p /home/ubuntu/relayer-core/logs

# Configure Supervisor
sudo tee /etc/supervisor/conf.d/relayer.conf > /dev/null <<EOF
[program:relayer]
command=/home/ubuntu/relayer-core/target/release/main
directory=/home/ubuntu/relayer-core
autostart=true
autorestart=true
stderr_logfile=/home/ubuntu/relayer-core/logs/relayer.err.log
stdout_logfile=/home/ubuntu/relayer-core/logs/relayer.out.log
user=ubuntu
environment=HOME="/home/ubuntu"
stderr_logfile_maxbytes=50MB
stdout_logfile_maxbytes=50MB
stderr_logfile_backups=10
stdout_logfile_backups=10
EOF

# Update and start Supervisor
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start relayer
```

The relayer core provides several API endpoints:
- Internal relayer communication
- State synchronization
- Event broadcasting
- Fee management
For detailed API documentation, refer to the Postman Collection.
```bash
# Start all services
docker-compose up -d
```

```bash
# Start only the relayer core
docker-compose up -d relayer-core

# Start only dependencies
docker-compose up -d kafka zookeeper
```

```bash
# Build with standard Dockerfile
docker build -t relayer-core .
```
## Monitoring and Logging
### Log Files
- **Application Logs**: `./logs/relayer.out.log`
- **Error Logs**: `./logs/relayer.err.log`
- **Rust Logs**: Configure with `RUST_LOG` environment variable
### Log Levels
```bash
# Set log level
export RUST_LOG=info        # info, debug, warn, error, trace
export RUST_BACKTRACE=full  # Enable full backtraces
```

### Health Checks

The relayer provides health check endpoints:
- HTTP health check on configured ports
- Kafka connectivity status
- Database connection status
- Redis connection status
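A minimal sketch of an HTTP health endpoint of this kind, using only the standard library. The JSON fields, port handling, and single-request structure are hypothetical, not the relayer's actual implementation; real checks would ping Kafka, PostgreSQL, and Redis.

```rust
use std::io::{Read, Write};
use std::net::TcpListener;

// Summarize dependency status; stubbed booleans stand in for real probes.
fn health_body(kafka_ok: bool, db_ok: bool, redis_ok: bool) -> (u16, String) {
    let healthy = kafka_ok && db_ok && redis_ok;
    let status = if healthy { 200 } else { 503 };
    let body = format!(
        "{{\"kafka\":{},\"postgres\":{},\"redis\":{}}}",
        kafka_ok, db_ok, redis_ok
    );
    (status, body)
}

fn main() -> std::io::Result<()> {
    // Bind to any free port for the sketch; a deployment would use the
    // configured health-check port.
    let listener = TcpListener::bind("127.0.0.1:0")?;
    let addr = listener.local_addr()?;

    // Exercise the endpoint once from a client thread so the sketch terminates.
    let client = std::thread::spawn(move || -> std::io::Result<String> {
        let mut s = std::net::TcpStream::connect(addr)?;
        s.write_all(b"GET /health HTTP/1.1\r\n\r\n")?;
        let mut resp = String::new();
        s.read_to_string(&mut resp)?; // read until the server closes
        Ok(resp)
    });

    let (mut stream, _) = listener.accept()?;
    let mut buf = [0u8; 1024];
    let _ = stream.read(&mut buf); // ignore request details in this sketch
    let (status, body) = health_body(true, true, true);
    let response = format!(
        "HTTP/1.1 {} {}\r\nContent-Type: application/json\r\nContent-Length: {}\r\n\r\n{}",
        status,
        if status == 200 { "OK" } else { "Service Unavailable" },
        body.len(),
        body
    );
    stream.write_all(response.as_bytes())?;
    drop(stream); // close the connection so the client's read completes

    let resp = client.join().unwrap()?;
    assert!(resp.starts_with("HTTP/1.1 200"));
    println!("{resp}");
    Ok(())
}
```

Returning 503 when any dependency is down lets load balancers and orchestrators take the instance out of rotation automatically.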
```bash
cargo test
```

```bash
cargo test --test integration
```

```bash
# Use provided Postman collection for load testing
# Configure concurrent requests based on your requirements
```

- Never commit wallet seeds or passwords to version control
- Use strong, randomly generated passwords
- Regularly rotate wallet credentials
- Backup wallet files securely
- Configure firewall rules for exposed ports
- Use TLS/SSL for external connections
- Implement rate limiting
- Monitor for suspicious activities
- Use strong database passwords
- Enable database encryption at rest
- Implement proper access controls
- Regular security updates
Minimum Requirements:
- CPU: 4 cores
- RAM: 8GB
- Storage: 100GB SSD
- Network: 100Mbps
Recommended Requirements:
- CPU: 8+ cores
- RAM: 16GB+
- Storage: 500GB+ NVMe SSD
- Network: 1Gbps+
```bash
# Increase thread counts for high load
RPC_SERVER_THREAD=20
RELAYER_SERVER_THREAD=4

# Optimize database connections
# Configure connection pooling in your database URLs

# Tune Kafka settings
# Increase partitions for higher throughput
```

```
src/
├── main.rs       # Application entry point
├── lib.rs        # Library exports
├── config/       # Configuration handling
├── database/     # Database interactions
├── kafka/        # Kafka message handling
├── matching/     # Order matching engine
├── rpc/          # RPC server implementation
├── websocket/    # WebSocket connections
└── utils/        # Utility functions
```
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Ensure all tests pass
- Submit a pull request
- Issues: GitHub Issues
- Documentation: Project Wiki
- Community: Discord Server
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- Initial release
- Core matching engine implementation
- Event sourcing architecture
- CQRS pattern implementation
- Docker deployment support
- Supervisor integration
Note: This is a high-performance trading system. Always test thoroughly in a development environment before deploying to production.