A B-Tree based key-value store with copy-on-write pages and a Raft-backed distributed mode for linearizable writes and optional follower reads.
- Raft Consensus: Linearizable writes with HashiCorp Raft implementation
- B-Tree Storage: Efficient key-value storage with copy-on-write pages
- HTTP API: RESTful interface for all operations
- Kubernetes Native: Production-ready Helm charts with automated scaling
- High Availability: Smart quorum management for distributed consensus
- Consistent Reads: Linearizable leader reads, optional stale follower reads
- Zero-Downtime Scaling: Automated node addition/removal during scaling operations
- Interactive Shell: `conuresh` REPL for easy database interaction
- Writes: Linearizable via Raft (acknowledged after commit on quorum)
- Reads:
  - Leader reads: Linearizable (the API issues a Raft barrier)
  - Follower reads: Eventually consistent with the `stale=true` parameter
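For illustration, the Go sketch below exercises both paths against the documented `/kv` endpoint. It assumes a local cluster where the node on :8081 is currently the leader and :8082 is a follower; adjust the addresses for your setup.

```go
// Illustrative sketch: assumes :8081 is the current leader and :8082 a follower.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func get(url string) string {
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return string(body)
}

func main() {
	// Linearizable write: acknowledged only after a Raft quorum commits it.
	req, err := http.NewRequest(http.MethodPut,
		"http://localhost:8081/kv?key=hello&value=world", nil)
	if err != nil {
		panic(err)
	}
	if _, err := http.DefaultClient.Do(req); err != nil {
		panic(err)
	}

	// Linearizable read: served by the leader behind a Raft barrier.
	fmt.Println(get("http://localhost:8081/kv?key=hello"))

	// Eventually consistent read: answered locally by a follower, may briefly lag.
	fmt.Println(get("http://localhost:8082/kv?key=hello&stale=true"))
}
```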
# Run single node
docker run -d --name conuredb \
-p 8081:8081 \
conuredb/conuredb:latest \
--node-id=node1 --bootstrap
# Test the API
curl -X PUT "http://localhost:8081/kv?key=hello&value=world"
curl "http://localhost:8081/kv?key=hello"
# Use the interactive shell
docker exec -it conuredb conuresh
ConureDB provides production-ready Helm charts for Kubernetes deployment:
# Add the Helm repository (when available)
helm repo add conuredb https://charts.conuredb.dev
helm repo update
# Install single-node for development
helm install conuredb conuredb/conuredb-single
# Install HA cluster for production (minimum 3 nodes)
helm install conuredb conuredb/conuredb-ha \
--set voters.replicas=3 \
--set voters.pvc.size=20Gi
# Clone the repository
git clone https://github.com/conuredb/conuredb.git
cd conuredb
# Build binaries
go build ./cmd/conure-db
go build ./cmd/conuresh
# Run tests
go test ./...
ConureDB uses a multi-component architecture optimized for Kubernetes:
- Single Process: One binary handles both bootstrap and storage
- File-based Storage: Local B-Tree files with Raft logs
- Bootstrap Node: Dedicated StatefulSet for cluster initialization
- Voter Nodes: Scalable StatefulSet for data storage and voting
- Smart Scaling: Automated Raft membership management during scaling
- Minimum cluster size: 3 nodes for production deployments
- Majority required: Floor(N/2) + 1 nodes must be available for writes
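Adding a node to an odd-sized cluster raises the quorum without raising fault tolerance, which is why odd sizes (3, 5, 7) are recommended. A quick sketch of the arithmetic:

```go
// Quorum arithmetic for a Raft cluster of n voters: writes need
// floor(n/2)+1 acknowledgements, so n - quorum failures are tolerated.
package main

import "fmt"

func quorum(n int) int { return n/2 + 1 }

func main() {
	for _, n := range []int{3, 4, 5, 7} {
		fmt.Printf("voters=%d  quorum=%d  tolerated failures=%d\n",
			n, quorum(n), n-quorum(n))
	}
}
```

Note that 4 voters need a quorum of 3 but still tolerate only one failure, the same as 3 voters.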
ConureDB supports both YAML configuration files and command-line flags.
Create a `config.yaml` file:
node_id: node1
data_dir: ./data/node1
raft_addr: 127.0.0.1:7001
http_addr: :8081
bootstrap: true
barrier_timeout: 3s
Flags override YAML configuration:
- `--config` string: Path to YAML configuration file
- `--node-id` string: Unique node identifier (stable across restarts)
- `--data-dir` string: Directory for database and Raft state
- `--raft-addr` string: Raft bind/advertise address (host:port)
- `--http-addr` string: HTTP API bind address
- `--bootstrap`: Bootstrap single-node cluster if no existing state
- `--barrier-timeout` duration: Leader read barrier timeout (e.g., `3s`)
If not specified anywhere:
node_id=node1
data_dir=./data
raft_addr=127.0.0.1:7001
http_addr=:8081
bootstrap=true
barrier_timeout=3s
# Start a single-node cluster
./conure-db --node-id=node1 --data-dir=./data/node1 --bootstrap
# Put a key-value pair
curl -X PUT 'http://localhost:8081/kv?key=hello&value=world'
# Get the value
curl 'http://localhost:8081/kv?key=hello'
# Check cluster status
curl 'http://localhost:8081/status'
# Terminal 1: Start bootstrap node
./conure-db --node-id=node1 --data-dir=./data/node1 \
--raft-addr=127.0.0.1:7001 --http-addr=:8081 --bootstrap
# Terminal 2: Start second node
./conure-db --node-id=node2 --data-dir=./data/node2 \
--raft-addr=127.0.0.1:7002 --http-addr=:8082
# Terminal 3: Start third node
./conure-db --node-id=node3 --data-dir=./data/node3 \
--raft-addr=127.0.0.1:7003 --http-addr=:8083
# Join nodes to cluster (from any terminal)
curl -X POST 'http://localhost:8081/join' \
-H 'Content-Type: application/json' \
-d '{"ID":"node2","RaftAddr":"127.0.0.1:7002"}'
curl -X POST 'http://localhost:8081/join' \
-H 'Content-Type: application/json' \
-d '{"ID":"node3","RaftAddr":"127.0.0.1:7003"}'
# Verify cluster configuration
curl 'http://localhost:8081/raft/config'
| Method | Endpoint | Description | Example |
|---|---|---|---|
| PUT | `/kv?key=<key>&value=<value>` | Store key-value pair | `PUT /kv?key=user&value=alice` |
| PUT | `/kv?key=<key>` (body) | Store with request body | `PUT /kv?key=config` + JSON body |
| GET | `/kv?key=<key>` | Get value (linearizable) | `GET /kv?key=user` |
| GET | `/kv?key=<key>&stale=true` | Get value (eventually consistent) | `GET /kv?key=user&stale=true` |
| DELETE | `/kv?key=<key>` | Delete key | `DELETE /kv?key=user` |
| Method | Endpoint | Description | Example |
|---|---|---|---|
| GET | `/status` | Get node and leader status | `{"is_leader":true,"leader":"..."}` |
| GET | `/raft/config` | Get cluster membership | List of nodes with IDs and addresses |
| GET | `/raft/stats` | Get Raft statistics | Detailed Raft metrics |
| POST | `/join` | Add node to cluster | `{"ID":"node2","RaftAddr":"..."}` |
| POST | `/remove` | Remove node from cluster | `{"ID":"node2"}` |
# Store data
curl -X PUT "http://localhost:8081/kv?key=app&value=conuredb"
# Read data
curl "http://localhost:8081/kv?key=app"
# Cluster operations
curl "http://localhost:8081/status"
curl "http://localhost:8081/raft/config"
ConureDB includes a remote shell that connects to the HTTP API:
# Connect to default server (localhost:8081)
./conuresh
# Connect to specific server
./conuresh --server=http://127.0.0.1:8081
# When using Docker
docker exec -it <container-name> conuresh
- `put <key> <value>` - Store a key-value pair
- `get <key>` - Retrieve a value
- `delete <key>` - Delete a key
- `help` - Show available commands
- `exit` - Exit the shell
The shell automatically follows leader redirects and handles cluster topology changes.
ConureDB provides two Helm charts optimized for different use cases:
- Use case: Development, testing, demo environments
- Features: Single replica, immediate bootstrap, minimal resources
- Scaling: Prevents scaling beyond 1 replica
# Install single-node chart
helm install mydb ./charts/conuredb-single
# Customization
helm install mydb ./charts/conuredb-single \
--set image.tag=v1.0.0 \
--set pvc.size=5Gi
- Use case: Production environments requiring high availability
- Features: Multi-node cluster with automated scaling
- Scaling: Minimum 3 nodes, automated Raft membership management
# Install HA cluster (3 nodes minimum)
helm install mydb ./charts/conuredb-ha \
--set voters.replicas=3
# Production configuration
helm install mydb ./charts/conuredb-ha \
--set voters.replicas=5 \
--set voters.pvc.size=100Gi \
--set voters.pvc.storageClassName=fast-ssd \
--set voters.resources.requests.cpu=500m \
--set voters.resources.requests.memory=1Gi
The HA chart supports zero-downtime scaling:
# Scale up (adds nodes to Raft cluster automatically)
helm upgrade mydb ./charts/conuredb-ha --set voters.replicas=5
# Scale down (removes nodes from Raft cluster first)
helm upgrade mydb ./charts/conuredb-ha --set voters.replicas=3
# Port forward for local access
kubectl port-forward svc/conure 8081:8081
# Direct pod access
kubectl exec -it conure-bootstrap-0 -- \
curl "http://localhost:8081/status"
# Service endpoints
kubectl get endpoints conure
# values-dev.yaml
voters:
  replicas: 3
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
  pvc:
    size: 1Gi
# values-prod.yaml
voters:
  replicas: 5
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 1000m
      memory: 2Gi
  pvc:
    size: 100Gi
    storageClassName: fast-ssd
security:
  runAsNonRoot: true
  runAsUser: 1000
  fsGroup: 1000
pdb:
  enabled: true
nodeSelector:
  node-type: database
tolerations:
  - key: database
    operator: Equal
    value: "true"
    effect: NoSchedule
# Check cluster health
kubectl exec conure-bootstrap-0 -- \
curl -s "http://localhost:8081/raft/config" | jq '.'
# View Raft statistics
kubectl exec conure-bootstrap-0 -- \
curl -s "http://localhost:8081/raft/stats" | jq '.'
# Check pod status
kubectl get pods -l app.kubernetes.io/name=conuredb
# View logs
kubectl logs -f conure-bootstrap-0
kubectl logs -l app.kubernetes.io/name=conuredb --tail=100
Symptoms: `/status` shows an empty leader, cluster appears stuck
Causes & Solutions:
- Bootstrap not applied: Ensure a clean `data_dir` on first start with `--bootstrap`
- Lost quorum: Maintain odd number of voters (3, 5, 7) for natural majority
- Network partition: Check connectivity between nodes
# Check cluster configuration
curl "http://localhost:8081/raft/config"
# Verify all nodes are reachable
curl "http://localhost:8081/status"
curl "http://localhost:8082/status"
Symptoms: Follower reads with `stale=true` return "key not found"
Explanation: This is expected briefly after writes on the leader; followers catch up as they apply the log
Solution: Use leader reads for guaranteed consistency:
# Guaranteed consistent read (from leader)
curl "http://localhost:8081/kv?key=mykey"
# Eventually consistent read (from follower)
curl "http://localhost:8082/kv?key=mykey&stale=true"
Symptoms: Logs show heartbeat failures to nodes that should be removed
Causes & Solutions:
- Removal not committed: Ensure removal was issued to the correct leader
- Wrong node ID: Verify node IDs match exactly in `/raft/config`
- Quorum lost during removal: Membership changes require active quorum
# Verify cluster membership before removal
curl "http://localhost:8081/raft/config"
# Remove node properly
curl -X POST "http://localhost:8081/remove"
-H 'Content-Type: application/json'
-d '{"ID":"exact-node-id-from-config"}'
Symptoms: Multiple database files, startup errors
Solution: Each node needs a unique `--data-dir`:
# Correct: unique directories
./conure-db --node-id=node1 --data-dir=./data/node1
./conure-db --node-id=node2 --data-dir=./data/node2
# Incorrect: shared directory
./conure-db --node-id=node1 --data-dir=./data # ❌
./conure-db --node-id=node2 --data-dir=./data # ❌
# Check if bootstrap node is ready
kubectl get pods -l app.kubernetes.io/name=conuredb
kubectl logs conure-bootstrap-0
# Check init container logs
kubectl logs conure-0 -c wait-for-bootstrap
# Check pre-scale job logs
kubectl logs job/conure-pre-scale-<revision>
# Verify current cluster state
kubectl exec conure-bootstrap-0 -- \
  curl -s "http://localhost:8081/raft/config"
# Check StatefulSet status
kubectl describe statefulset conure
# Check PVC status
kubectl get pvc
# Verify storage class exists
kubectl get storageclass
# Check pod events
kubectl describe pod conure-0
# Local debugging
curl "http://localhost:8081/raft/stats" # Detailed Raft metrics
curl "http://localhost:8081/raft/config" # Current cluster membership
curl "http://localhost:8081/status" # Leader status
# Kubernetes debugging
kubectl exec conure-bootstrap-0 -- curl -s "http://localhost:8081/raft/stats"
kubectl logs -f conure-bootstrap-0
kubectl describe pod conure-bootstrap-0
- Go 1.23.0 or later
- Docker (for containerized testing)
- Kubernetes cluster (for Helm chart testing)
# Install dependencies
go mod download
# Build all binaries
go build ./...
# Build specific components
go build -o bin/conure-db ./cmd/conure-db
go build -o bin/conuresh ./cmd/repl
# Cross-compilation
GOOS=linux GOARCH=amd64 go build -o bin/conure-db-linux ./cmd/conure-db
# Run all tests
go test ./...
# Run tests with coverage
go test -cover ./...
# Run tests with race detection
go test -race ./...
# Run specific package tests
go test ./pkg/api
go test ./btree
# Benchmark tests
go test -bench=. ./btree
# Build Docker image
docker build -t conuredb/conuredb:dev .
# Run containerized tests
docker run --rm conuredb/conuredb:dev go test ./...
# Run a container and access the shell
docker run -d --name conuredb-test -p 8081:8081 conuredb/conuredb:dev --bootstrap
docker exec -it conuredb-test conuresh
# Multi-stage build for minimal image
docker build --target=production -t conuredb/conuredb:latest .
# Lint charts
helm lint charts/conuredb-single
helm lint charts/conuredb-ha
# Template generation (dry-run)
helm template test charts/conuredb-ha --debug
# Install in development namespace
helm install dev charts/conuredb-ha \
  --namespace dev \
  --create-namespace \
  --set image.tag=dev
# Test scaling
helm upgrade dev charts/conuredb-ha --set voters.replicas=5
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Make your changes and add tests
- Ensure all tests pass: `go test ./...`
- Commit your changes: `git commit -m 'Add amazing feature'`
- Push to your branch: `git push origin feature/amazing-feature`
- Open a Pull Request
- Performance optimization: B-Tree improvements, Raft tuning
- Security features: Authentication, encryption at rest
- Monitoring: Metrics, observability improvements
- Documentation: API examples, deployment guides
- Testing: Integration tests, chaos engineering
- Client libraries: SDK for various programming languages
This project is licensed under the MIT License - see the LICENSE file for details.
- Documentation: GitHub Wiki
- Docker Images: Docker Hub
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Security: Security Policy