OS2IoT is an open-source IoT device management platform developed by OS2 (Denmark's public sector open-source organization). This repository provides a production-ready Kubernetes deployment using GitOps principles with ArgoCD.
This documentation is written for DevOps engineers and system administrators who are comfortable with Kubernetes and Helm. Following GitOps principles, all configuration lives in this Git repository - ArgoCD watches for changes and automatically syncs them to your cluster.
| Component | Description |
|---|---|
| OS2IoT Platform | Web-based IoT device management frontend and API backend |
| ChirpStack | LoRaWAN network server for sensor connectivity |
| Mosquitto | MQTT brokers (internal for ChirpStack, device-facing with auth) |
| PostgreSQL | Shared database cluster via CloudNativePG |
| Kafka/Zookeeper | Message streaming infrastructure |
| Traefik | Ingress controller with TLS termination |
| Supporting Infrastructure | ArgoCD, cert-manager, sealed-secrets |
graph TB
subgraph "External Traffic"
LB[Load Balancer]
GW[LoRaWAN Gateways]
end
subgraph "Ingress Layer"
TR[Traefik]
end
subgraph "Applications"
FE[os2iot-frontend]
BE[os2iot-backend]
CS[ChirpStack]
CGW[ChirpStack Gateway]
end
subgraph "Message Brokers"
MQ1[Mosquitto<br/>ChirpStack]
MQ2[Mosquitto Broker<br/>Devices]
KF[Kafka]
ZK[Zookeeper]
end
subgraph "Data Layer"
PG[(PostgreSQL)]
RD[(Redis)]
end
LB --> TR
GW -->|UDP 1700| TR
TR --> FE
TR --> BE
TR --> CS
TR -->|UDP| CGW
FE --> BE
BE --> PG
BE --> KF
BE --> CS
CS --> PG
CS --> RD
CS --> MQ1
CGW --> MQ1
MQ2 --> PG
KF --> ZK
- Prerequisites
- Quick Start
- Cloud Provider Configuration
- Installation Guide
- Architecture Reference
- Configuration Reference
- Operations
- Troubleshooting
- Contributing
- License
The deployment relies on a standard Kubernetes toolchain. You'll use kubectl to interact with your cluster, helm to
package and deploy applications, and kubeseal to encrypt secrets so they can be safely stored in Git. Each tool plays
a specific role in the GitOps workflow.
| Tool | Minimum Version | Purpose |
|---|---|---|
| Kubernetes | 1.26+ | Container orchestration |
| Helm | 3.12+ | Chart management |
| kubectl | 1.26+ | Cluster interaction |
| kubeseal | 0.24+ | Secret encryption |
| git | 2.0+ | GitOps workflow |
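A quick way to confirm the toolchain is in place before you start (output formats vary slightly by version):

```bash
kubectl version --client
helm version --short
kubeseal --version
git --version
```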
| Resource | Minimum | Recommended |
|---|---|---|
| Nodes | 3 | 3+ (for HA) |
| CPU per node | 4 cores | 8 cores |
| Memory per node | 8 GB | 16 GB |
| Storage | 50 GB | 100 GB+ |
The following ports must be accessible from the internet:
| Port | Protocol | Purpose |
|---|---|---|
| 80 | TCP | HTTP (redirects to HTTPS) |
| 443 | TCP | HTTPS |
| 1700 | UDP | LoRaWAN gateway traffic |
| 8884 | TCP | MQTT with client certificates |
| 8885 | TCP | MQTT with username/password |
Throughout this documentation, placeholders are used:
| Variable | Description | Example |
|---|---|---|
| `<FQDN>` | Fully qualified domain name | iot.example.com |
| `<CERT_MAIL>` | Email for Let's Encrypt | [email protected] |
Generate secure passwords with:
echo "$(cat /dev/urandom | tr -dc 'a-f0-9' | fold -w 32 | head -n 1)"The bootstrap process installs ArgoCD first, which then takes over and automatically deploys all other applications in the correct order. Once complete, ArgoCD continuously monitors this Git repository and applies any changes you commit.
# Clone and configure
git clone https://github.com/os2iot/OS2IoT-helm.git
cd OS2IoT-helm
# Edit configuration
# 1. Set your domain in applications/argo-cd/values.yaml
# 2. Set your repo URL in applications/argo-cd-resources/values.yaml
# 3. Set your email in applications/cert-manager/templates/cluster-issuer.yaml
# Bootstrap everything
./bootstrap.sh

kubectl port-forward svc/argo-cd-argocd-server -n argo-cd 8443:443
# Open https://localhost:8443
# Get password:
kubectl -n argo-cd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo

# Terminal 1: Backend
kubectl port-forward -n os2iot-backend svc/os2iot-backend-svc 3000:3000
# Terminal 2: Frontend
kubectl port-forward -n os2iot-frontend svc/os2iot-frontend-svc 8081:8081
# Open http://localhost:8081
# Login: [email protected] / hunter2

| Script | Purpose |
|---|---|
| `./bootstrap.sh` | Full automated cluster bootstrap |
| `./seal-secrets.sh` | Generate and seal all secrets |
| `./generate-chirpstack-api-key.sh` | Generate ChirpStack API key |
| `./bootstrap-os2iot-org.sh` | Create default organization |
| `./uninstall.sh` | Full cleanup and removal |
These scripts automate repetitive tasks and encode best practices. Use them instead of running manual commands where possible.
Three things vary between cloud providers: how persistent storage is provisioned, how load balancers are created, and how nodes are selected for scheduling. This deployment is pre-configured for Hetzner Cloud via Cloudfleet.ai, but the sections below show how to adapt it for AWS, GCP, Azure, or bare metal environments.
- Hetzner API Token: Required for CSI driver to provision volumes
- Cloudfleet cluster: Nodes must have the label `cfke.io/provider: hetzner`
The cluster-resources application deploys the Hetzner CSI driver:
- StorageClass: `hcloud-volumes` (default)
- Volume binding: `WaitForFirstConsumer`
- Reclaim policy: `Retain`
Limitations:
- Hetzner volumes can only attach to Hetzner nodes
- `ReadWriteMany` is NOT supported (use `ReadWriteOnce`)
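For reference, the StorageClass rendered by cluster-resources is roughly equivalent to the sketch below (the exact manifest in the chart may differ; the provisioner name is the one used by hcloud-csi):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hcloud-volumes
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # default StorageClass
provisioner: csi.hetzner.cloud           # Hetzner Cloud CSI driver
volumeBindingMode: WaitForFirstConsumer  # volume is created where the pod lands
reclaimPolicy: Retain                    # volumes survive PVC deletion
```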
cd applications/cluster-resources
mkdir -p local-secrets
cat > local-secrets/hcloud-token.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
name: hcloud
namespace: kube-system
type: Opaque
stringData:
token: "YOUR_HETZNER_API_TOKEN"
EOF
kubeseal --format yaml \
--controller-name=sealed-secrets \
--controller-namespace=sealed-secrets \
< local-secrets/hcloud-token.yaml > templates/hcloud-token-sealed-secret.yaml

Cloudfleet automatically provisions Hetzner Load Balancers. Do not use Hetzner CCM - it conflicts with Cloudfleet's controller.
Get the LoadBalancer IP for DNS:
kubectl get svc traefik -n traefik

Configure DNS A records pointing to the Traefik LB IP:
your-domain.com A <traefik-lb-ip>
*.your-domain.com A <traefik-lb-ip>
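A quick sanity check that DNS and the load balancer line up (assumes the records have already propagated):

```bash
# IP assigned to the Traefik LoadBalancer service
kubectl get svc traefik -n traefik -o jsonpath='{.status.loadBalancer.ingress[0].ip}'; echo

# Both the apex and the wildcard should resolve to that IP
dig +short your-domain.com
dig +short anything.your-domain.com
```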
All applications are pre-configured to run in fsn1 region via nodeSelector. To change:
# In each application's values.yaml
nodeSelector:
topology.kubernetes.io/region: nbg1 # Change region
# Or disable region restriction:
nodeSelector: { }

For non-Hetzner deployments, you need to adapt storage (CSI driver and StorageClass), load balancing, and node scheduling:
| Component | Hetzner | AWS EKS | GKE | AKS | Bare Metal |
|---|---|---|---|---|---|
| CSI Driver | hcloud-csi | aws-ebs-csi | Built-in | Built-in | Longhorn/OpenEBS |
| StorageClass | hcloud-volumes | gp3 | pd-standard | managed-premium | longhorn |
| Load Balancer | Cloudfleet auto | AWS LB Controller | Built-in | Built-in | MetalLB |
| Node Selector | fsn1 region | Remove or use zones | Remove or use zones | Remove or use zones | Remove |
- Disable Hetzner CSI in applications/cluster-resources/values.yaml:

  hcloud-csi:
    enabled: false

- Install AWS EBS CSI Driver:

  eksctl create addon --name aws-ebs-csi-driver --cluster <cluster-name>

- Create StorageClass and update applications/postgres/values.yaml:

  cluster:
    storage:
      storageClass: "gp3"

- Install AWS Load Balancer Controller and update Traefik:

  traefik:
    service:
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb"

- Remove nodeSelectors from all applications or use availability zones.
See AWS EBS CSI Driver documentation for details.
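As a sketch of the StorageClass from step 3, a gp3 class for the EBS CSI driver might look like this (optional parameters such as IOPS are omitted):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com        # AWS EBS CSI driver
parameters:
  type: gp3                         # gp3 volume type
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
allowVolumeExpansion: true
```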
- Disable Hetzner CSI (same as AWS)

- GKE includes CSI driver by default - create StorageClass if needed:

  storageClass: "pd-standard"

- Update Traefik for GKE load balancer:

  traefik:
    service:
      annotations:
        cloud.google.com/load-balancer-type: "External"

- Remove nodeSelectors or use GKE zones.
See GKE persistent volumes documentation for details.
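If you do need an explicit StorageClass on GKE, a minimal sketch using the GCE Persistent Disk CSI driver could look like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-standard
provisioner: pd.csi.storage.gke.io  # GCE Persistent Disk CSI driver
parameters:
  type: pd-standard                 # standard persistent disks
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
```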
- Disable Hetzner CSI (same as AWS)

- AKS includes Azure Disk CSI - use built-in StorageClass:

  storageClass: "managed-premium"

- Configure static IP if needed:

  az network public-ip create --name os2iot-ip --resource-group <rg> --allocation-method Static

- Remove nodeSelectors or use AKS zones.
See AKS storage documentation for details.
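To wire the static IP into Traefik, the values override might look roughly like the sketch below (the annotation is the standard Azure cloud-provider one; verify the exact structure against your Traefik chart version):

```yaml
traefik:
  service:
    annotations:
      # Resource group that holds the static public IP created above
      service.beta.kubernetes.io/azure-load-balancer-resource-group: "<rg>"
    spec:
      loadBalancerIP: "<static-ip>"   # IP from 'az network public-ip create'
```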
- Install Longhorn for storage:

  helm repo add longhorn https://charts.longhorn.io
  helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace

  Update applications/postgres/values.yaml:

  cluster:
    storage:
      storageClass: "longhorn"

- Install MetalLB for LoadBalancer:

  kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.5/config/manifests/metallb-native.yaml

  Configure IP pool:

  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: default
    namespace: metallb-system
  spec:
    addresses:
      - 192.168.1.200-192.168.1.250 # Adjust to your network
  ---
  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: default
    namespace: metallb-system

- Remove all nodeSelectors from application values.yaml files.
See Longhorn documentation and MetalLB documentation for details.
The installation follows a specific sequence: ArgoCD goes first because it manages all other deployments, then Sealed Secrets so you can encrypt credentials, then your sealed secrets must be committed to Git so they're available when ArgoCD deploys the applications that need them.
The bootstrap.sh script handles the complete installation:
./bootstrap.sh

The script will:
- Verify prerequisites (kubectl, helm, kubeseal)
- Install ArgoCD
- Install Sealed Secrets
- Generate and seal all secrets
- Prompt you to commit sealed secrets to Git
- Install ArgoCD resources (app-of-apps)
Before running, configure:
- `applications/argo-cd/values.yaml` - Set `global.domain`
- `applications/argo-cd-resources/values.yaml` - Set `repoUrl`
- `applications/cert-manager/templates/cluster-issuer.yaml` - Set email
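A hedged sketch of what those three edits typically look like - only the keys named above are shown, and the surrounding structure in each file may differ:

```yaml
# applications/argo-cd/values.yaml
global:
  domain: iot.example.com

# applications/argo-cd-resources/values.yaml
repoUrl: https://github.com/<your-org>/OS2IoT-helm.git

# applications/cert-manager/templates/cluster-issuer.yaml (ACME section)
spec:
  acme:
    email: <CERT_MAIL>
```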
If you prefer manual control, follow this exact sequence:
helm repo add argocd https://argoproj.github.io/argo-helm
helm repo update
cd applications/argo-cd
helm dependency build
kubectl create namespace argo-cd
helm template argo-cd . -n argo-cd | kubectl apply -f -

Verify ArgoCD is running:
kubectl port-forward svc/argo-cd-argocd-server -n argo-cd 8443:443
# Open https://localhost:8443
# Username: admin
# Password: kubectl -n argo-cd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

cd applications/sealed-secrets
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm repo update
helm dependency build
kubectl create namespace sealed-secrets
helm template sealed-secrets . -n sealed-secrets | kubectl apply -f -
# Wait for controller
kubectl wait --for=condition=available --timeout=300s deployment/sealed-secrets -n sealed-secrets

./seal-secrets.sh

The script seals all secrets. If any contain placeholder values, it will warn you to update them first.
git add applications/*/templates/*-sealed-secret.yaml
git commit -m "Add sealed secrets for applications"
git push

cd applications/argo-cd-resources
helm template argo-cd-resources . -n argo-cd | kubectl apply -f -

ArgoCD will now automatically sync all applications.
Sync waves ensure that dependencies are deployed before the applications that need them. For example, the PostgreSQL operator must be running before you can create a PostgreSQL cluster, and the database must exist before applications can connect to it. ArgoCD deploys applications in waves to enforce this ordering. All apps have automatic retry (5 attempts, 30s-5m exponential backoff) to handle transient failures like webhook unavailability during operator startup.
| Wave | Applications | Purpose |
|---|---|---|
| 0 | cluster-resources | CSI driver and StorageClasses |
| 1 | argo-cd, argo-cd-resources, traefik, cert-manager, sealed-secrets | Core infrastructure |
| 2 | cloudnative-pg-operator, redis-operator | Operators (CRDs and webhooks) |
| 3 | postgres | Database cluster |
| 4 | mosquitto, zookeeper | Message brokers |
| 5 | chirpstack, chirpstack-gateway, kafka | Apps depending on brokers/databases |
| 6 | mosquitto-broker, os2iot-backend | Apps depending on postgres |
| 7 | os2iot-frontend | Frontend |
Services follow the pattern {app-name}-svc.{namespace}:
| Service | Address |
|---|---|
| PostgreSQL (read-write) | postgres-cluster-rw.postgres:5432 |
| PostgreSQL (read-only) | postgres-cluster-ro.postgres:5432 |
| Mosquitto (ChirpStack) | mosquitto-svc.mosquitto:1883 |
| Mosquitto (Devices) | mosquitto-broker-svc.mosquitto-broker:8884/8885 |
| Kafka | kafka-svc.kafka:9092 |
| Zookeeper | zookeeper-svc.zookeeper:2181 |
| ChirpStack | chirpstack-clusterip-svc.chirpstack:8081 |
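To confirm a service address resolves from inside the cluster, a throwaway pod works well, for example:

```bash
kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- \
  nslookup postgres-cluster-rw.postgres.svc.cluster.local
```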
| Type | Used By | Command |
|---|---|---|
| Kubernetes Ingress | argo-cd, chirpstack | kubectl get ingress -A |
| Traefik IngressRoute | os2iot-frontend | kubectl get ingressroute -A |
| Traefik IngressRouteUDP | LoRaWAN | kubectl get ingressrouteudp -A |
The platform uses a shared PostgreSQL cluster managed by CloudNativePG. A single database cluster simplifies operations and backup management. Because Kubernetes enforces namespace isolation for security, each application that needs database access requires its own copy of the credentials secret in its namespace - this is why you'll see the same password defined in multiple secret files.
| User | Purpose | Database | Access |
|---|---|---|---|
| `os2iot` | OS2IoT backend (owner) | os2iot | Full (owner) |
| `chirpstack` | ChirpStack LoRaWAN server | os2iot | Full (granted) |
| `mqtt` | Mosquitto broker authentication | os2iot | Read-only (SELECT) |
Database credentials must be sealed for both the postgres namespace (for role creation) and application namespaces (for deployment access).
Create the following files in applications/postgres/local-secrets/:
chirpstack-user-secret.yaml (for postgres namespace):
apiVersion: v1
kind: Secret
metadata:
name: postgres-cluster-chirpstack
namespace: postgres
type: Opaque
stringData:
username: chirpstack
password: <GENERATE_SECURE_PASSWORD>

chirpstack-user-secret-for-chirpstack-ns.yaml (for chirpstack namespace):
apiVersion: v1
kind: Secret
metadata:
name: postgres-cluster-chirpstack
namespace: chirpstack
type: Opaque
stringData:
username: chirpstack
password: <SAME_PASSWORD_AS_ABOVE>

os2iot-user-secret.yaml (for postgres namespace):
apiVersion: v1
kind: Secret
metadata:
name: postgres-cluster-os2iot
namespace: postgres
type: Opaque
stringData:
username: os2iot
password: <GENERATE_SECURE_PASSWORD>

os2iot-user-secret-for-backend-ns.yaml (for os2iot-backend namespace):
apiVersion: v1
kind: Secret
metadata:
name: postgres-cluster-os2iot
namespace: os2iot-backend
type: Opaque
stringData:
username: os2iot
password: <SAME_PASSWORD_AS_ABOVE>

mqtt-user-secret.yaml (for postgres namespace):
apiVersion: v1
kind: Secret
metadata:
name: postgres-cluster-mqtt
namespace: postgres
type: Opaque
stringData:
username: mqtt
password: <GENERATE_SECURE_PASSWORD>

mqtt-user-secret-for-broker-ns.yaml (for mosquitto-broker namespace):
apiVersion: v1
kind: Secret
metadata:
name: postgres-cluster-mqtt
namespace: mosquitto-broker
type: Opaque
stringData:
username: mqtt
password: <SAME_PASSWORD_AS_ABOVE>

cd applications/postgres
# Seal secrets for postgres namespace
kubeseal --format yaml \
--controller-name=sealed-secrets \
--controller-namespace=sealed-secrets \
< local-secrets/chirpstack-user-secret.yaml > templates/chirpstack-user-sealed-secret.yaml
kubeseal --format yaml \
--controller-name=sealed-secrets \
--controller-namespace=sealed-secrets \
< local-secrets/os2iot-user-secret.yaml > templates/os2iot-user-sealed-secret.yaml
kubeseal --format yaml \
--controller-name=sealed-secrets \
--controller-namespace=sealed-secrets \
< local-secrets/mqtt-user-secret.yaml > templates/mqtt-user-sealed-secret.yaml
# Seal secrets for application namespaces
kubeseal --format yaml \
--controller-name=sealed-secrets \
--controller-namespace=sealed-secrets \
< local-secrets/chirpstack-user-secret-for-chirpstack-ns.yaml > ../chirpstack/templates/postgres-cluster-chirpstack-sealed-secret.yaml
kubeseal --format yaml \
--controller-name=sealed-secrets \
--controller-namespace=sealed-secrets \
< local-secrets/os2iot-user-secret-for-backend-ns.yaml > ../os2iot-backend/templates/postgres-cluster-os2iot-sealed-secret.yaml
kubeseal --format yaml \
--controller-name=sealed-secrets \
--controller-namespace=sealed-secrets \
< local-secrets/mqtt-user-secret-for-broker-ns.yaml > ../mosquitto-broker/templates/postgres-cluster-mqtt-sealed-secret.yaml

| Setting | Value |
|---|---|
| Host (read-write) | postgres-cluster-rw.postgres |
| Host (read-only) | postgres-cluster-ro.postgres |
| Port | 5432 |
| Database | os2iot |
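Once the secrets and cluster are in place, you can verify connectivity with a short-lived client pod, reusing the password from the backend's secret (a diagnostic sketch; adjust names if you changed them):

```bash
# Read the os2iot password from the secret in the backend namespace
PGPASS=$(kubectl -n os2iot-backend get secret postgres-cluster-os2iot \
  -o jsonpath='{.data.password}' | base64 -d)

# Connect through the read-write service and run a trivial query
kubectl run psql-check -n postgres --rm -it --restart=Never \
  --image=postgres:16 --env="PGPASSWORD=$PGPASS" -- \
  psql -h postgres-cluster-rw.postgres -U os2iot -d os2iot -c 'SELECT 1'
```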
The backend is the central API server that also acts as a certificate authority for IoT device authentication. It requires several secrets: a CA certificate and key for signing device certificates, an encryption key for protecting sensitive data in the database, SMTP credentials for sending notifications, and a ChirpStack API key for communicating with the LoRaWAN network server.
The backend needs a CA certificate and key for device authentication (MQTT client certificates).
cd applications/os2iot-backend/local-secrets
# Generate CA private key (with password encryption)
openssl genrsa -aes256 -passout pass:<CA_KEY_PASSWORD> -out ca.key 4096
# Generate CA certificate (valid for 10 years)
openssl req -new -x509 -days 3650 -key ca.key -passin pass:<CA_KEY_PASSWORD> -out ca.crt \
-subj "/CN=OS2IoT-Device-CA/O=OS2IoT/C=DK"Create applications/os2iot-backend/local-secrets/ca-keys.yaml:
apiVersion: v1
kind: Secret
metadata:
name: ca-keys
namespace: os2iot-backend
type: Opaque
stringData:
password: "<CA_KEY_PASSWORD>"
ca.crt: |
-----BEGIN CERTIFICATE-----
<contents of ca.crt>
-----END CERTIFICATE-----
ca.key: |
-----BEGIN ENCRYPTED PRIVATE KEY-----
<contents of ca.key>
-----END ENCRYPTED PRIVATE KEY-----

cd applications/os2iot-backend
kubeseal --format yaml \
--controller-name=sealed-secrets \
--controller-namespace=sealed-secrets \
< local-secrets/ca-keys.yaml > templates/ca-keys-sealed-secret.yaml

The backend uses a symmetric encryption key for encrypting sensitive data in the database.
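One way to produce a suitable 32-character hex key (any source of 16 random bytes works):

```bash
openssl rand -hex 16   # 16 random bytes -> 32 hex characters
```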
1. Create the secret file
Create applications/os2iot-backend/local-secrets/encryption-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
name: os2iot-backend-encryption
namespace: os2iot-backend
type: Opaque
stringData:
symmetricKey: "<GENERATE_32_CHAR_HEX_KEY>"

2. Seal the secret
cd applications/os2iot-backend
kubeseal --format yaml \
--controller-name=sealed-secrets \
--controller-namespace=sealed-secrets \
< local-secrets/encryption-secret.yaml > templates/encryption-sealed-secret.yaml

The backend uses SMTP for sending emails (password resets, notifications).
1. Create the secret file
Create applications/os2iot-backend/local-secrets/email-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
name: os2iot-backend-email
namespace: os2iot-backend
type: Opaque
stringData:
user: "<YOUR_SMTP_USERNAME>"
pass: "<YOUR_SMTP_PASSWORD>"2. Seal the secret
cd applications/os2iot-backend
kubeseal --format yaml \
--controller-name=sealed-secrets \
--controller-namespace=sealed-secrets \
< local-secrets/email-secret.yaml > templates/email-sealed-secret.yaml

3. Configure SMTP host and port
Update applications/os2iot-backend/values.yaml:
os2iotBackend:
email:
host: smtp.example.com
port: "587"
from: "[email protected]"The backend runs database migrations on startup. If the container fails to start, use these commands to diagnose the issue.
# View logs from current container
kubectl logs -n os2iot-backend -l app=os2iot-backend
# View logs from previous crashed container
kubectl logs -n os2iot-backend -l app=os2iot-backend --previous

The container is configured with terminationMessagePolicy: FallbackToLogsOnError, which captures the last log output on failure:
kubectl describe pod -n os2iot-backend -l app=os2iot-backend

Look for the Last State section to see the termination reason and message.
npm writes detailed logs to /home/node/.npm/_logs/. These are persisted in an emptyDir volume and can be accessed if
the container is in CrashLoopBackOff:
# List available log files
kubectl exec -n os2iot-backend <pod-name> -- ls -la /home/node/.npm/_logs/
# View a specific log file
kubectl exec -n os2iot-backend <pod-name> -- cat /home/node/.npm/_logs/<log-file>.log
# Copy all npm logs locally
kubectl cp os2iot-backend/<pod-name>:/home/node/.npm/_logs ./npm-logs

| Symptom | Likely Cause | Solution |
|---|---|---|
| SIGTERM during migrations | Startup probe timeout | Increase failureThreshold in deployment |
| Database connection refused | PostgreSQL not ready | Check postgres-cluster pods and secrets |
| Missing secret key | Sealed secret not deployed | Verify sealed secrets exist in namespace |
The backend requires a Network Server (Admin) API key from ChirpStack to communicate with the LoRaWAN network server for device management and data retrieval.
Important: This must be a Network Server API key (not a Tenant API key), as the backend queries gateways and devices across the entire ChirpStack instance.
After ChirpStack is deployed, run the helper script:
./generate-chirpstack-api-key.sh

This script will:
- Connect to the running ChirpStack pod
- Generate a Network Server API key via ChirpStack CLI
- Automatically create/update `applications/os2iot-backend/local-secrets/chirpstack-api-key.yaml`
Then seal and commit the secret:
./seal-secrets.sh
git add applications/os2iot-backend/templates/chirpstack-api-key-sealed-secret.yaml
git commit -m "Add ChirpStack API key"
git push

- Port-forward to ChirpStack:

  kubectl port-forward svc/chirpstack-clusterip-svc -n chirpstack 8080:8081

- Login at http://localhost:8080 (admin/admin)

- Navigate to Network Server → API Keys (NOT Tenant API Keys)

- Create key, copy the token immediately

- Create applications/os2iot-backend/local-secrets/chirpstack-api-key.yaml:

  apiVersion: v1
  kind: Secret
  metadata:
    name: chirpstack-api-key
    namespace: os2iot-backend
  type: Opaque
  stringData:
    apiKey: "<YOUR_CHIRPSTACK_API_KEY>"

- Seal and commit.
Verify Configuration:
Ensure applications/os2iot-backend/values.yaml has the correct ChirpStack service URL:
os2iotBackend:
chirpstack:
hostname: "chirpstack-clusterip-svc.chirpstack"
port: "8081"The backend will automatically use the chirpstack-api-key secret for authentication.
After the backend is deployed, create a default organization.
A Kubernetes Job automatically creates a default organization. Verify:
kubectl logs job/os2iot-backend-bootstrap -n os2iot-backend

Alternatively, create it manually with:

./bootstrap-os2iot-org.sh

- Email: [email protected]
- Password: hunter2
Change the default password immediately after first login!
Both port-forwards are required:
# Terminal 1: Backend API
kubectl port-forward -n os2iot-backend svc/os2iot-backend-svc 3000:3000
# Terminal 2: Frontend
kubectl port-forward -n os2iot-frontend svc/os2iot-frontend-svc 8081:8081

Open: http://localhost:8081
This deployment includes two separate Mosquitto instances that serve different purposes. The first (mosquitto) handles
internal ChirpStack LoRaWAN traffic and runs without encryption since it only accepts connections from within the
cluster. The second (mosquitto-broker) is the external-facing broker for IoT devices, secured with TLS and
authenticating clients against the PostgreSQL database.
| Port | Description |
|---|---|
| 8884 | MQTT with client certificate authentication |
| 8885 | MQTT with username/password authentication |
Configure the PostgreSQL connection in your values override or via ArgoCD:
mosquittoBroker:
database:
host: "postgres-cluster-rw.postgres"
port: "5432"
username: "mqtt"
password: "your-password"
name: "os2iot"
sslMode: "disable" # or "verify-ca" for production

The broker requires TLS certificates for secure MQTT communication. Certificates are stored as Kubernetes Secrets and managed via SealedSecrets.
| Secret Name | Keys | Description |
|---|---|---|
| `ca-keys` | `ca.crt` | CA certificate for client verification |
| `server-keys` | `server.crt`, `server.key` | Server certificate and private key |
cd applications/mosquitto-broker/local-secrets
# Generate CA
openssl genrsa -out ca.key 4096
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt \
-subj "/CN=OS2IoT-Mosquitto-CA/O=OS2IoT/C=DK"
# Generate server certificate
openssl genrsa -out server.key 4096
openssl req -new -key server.key -out server.csr \
-subj "/CN=mosquitto-broker/O=OS2IoT/C=DK"
openssl x509 -req -days 3650 -in server.csr \
-CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt
rm server.csr ca.srl

Obtain certificates from a trusted CA and place in applications/mosquitto-broker/local-secrets/.
Create applications/mosquitto-broker/local-secrets/ca-keys.yaml:
apiVersion: v1
kind: Secret
metadata:
name: ca-keys
namespace: mosquitto-broker
type: Opaque
stringData:
ca.crt: |
-----BEGIN CERTIFICATE-----
<your CA certificate content>
-----END CERTIFICATE-----

Create applications/mosquitto-broker/local-secrets/server-keys.yaml:
apiVersion: v1
kind: Secret
metadata:
name: server-keys
namespace: mosquitto-broker
type: Opaque
stringData:
server.crt: |
-----BEGIN CERTIFICATE-----
<your server certificate content>
-----END CERTIFICATE-----
server.key: |
-----BEGIN PRIVATE KEY-----
<your server private key content>
-----END PRIVATE KEY-----

Seal:
kubeseal --format yaml \
--controller-name=sealed-secrets \
--controller-namespace=sealed-secrets \
< applications/mosquitto-broker/local-secrets/ca-keys.yaml > applications/mosquitto-broker/templates/ca-keys-sealed-secret.yaml
kubeseal --format yaml \
--controller-name=sealed-secrets \
--controller-namespace=sealed-secrets \
< applications/mosquitto-broker/local-secrets/server-keys.yaml > applications/mosquitto-broker/templates/server-keys-sealed-secret.yaml

Commit only the sealed secrets - the local-secrets/ directory is gitignored.
To rotate certificates:
- Generate new certificates following the steps above
- Create new sealed secrets
- Commit and push - ArgoCD will automatically deploy the updated secrets
- Restart the broker pod:
kubectl rollout restart deployment/mosquitto-broker -n mosquitto-broker
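After rotation (or at any point), you can exercise the username/password listener from outside the cluster with the Mosquitto client tools - a sketch assuming a user and topic permissions already exist in OS2IoT:

```bash
# ca.crt: the CA that signed the broker's server certificate
# topic access depends on the ACLs configured for the user in OS2IoT
mosquitto_pub -h <FQDN> -p 8885 --cafile ca.crt \
  -u <mqtt-username> -P <mqtt-password> \
  -t devices/test -m 'hello'
```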
Day-to-day operations focus on monitoring ArgoCD sync status and pod health. CloudNativePG handles database maintenance (backups, failover, connection pooling) automatically, so you primarily need to watch for application-level issues and ensure ArgoCD can reach your Git repository.
# Check all ArgoCD applications
kubectl get applications -n argo-cd
# Watch sync status
watch kubectl get applications -n argo-cd
# Check all pods
kubectl get pods -A
# Check PostgreSQL cluster
kubectl get clusters -n postgres

Enable backups in applications/postgres/values.yaml:
backups:
enabled: true
provider: s3
s3:
bucket: "your-backup-bucket"
region: "eu-west-1"
retentionPolicy: "30d"
schedule: "0 0 * * *" # Daily at midnight

- Update chart versions in `Chart.yaml` files
- Rebuild dependencies: `helm dependency build`
- Commit and push - ArgoCD auto-syncs
- Monitor: `kubectl get applications -n argo-cd`
Full cleanup:
./uninstall.sh

Manual (preserves data):
kubectl delete applications --all -n argo-cd

When something goes wrong, start by running kubectl get pods -A to find pods that aren't Running or Ready. Once you've
identified the unhealthy pod, check its logs and describe output to understand what's failing. The table below covers
common issues, but the debug commands that follow are useful for investigating any problem.
| Symptom | Likely Cause | Solution |
|---|---|---|
| Frontend shows CORS errors | Backend not accessible | Run BOTH backend and frontend port-forwards |
| Pod stuck in Pending | No storage available | Check StorageClass and PVC status |
| CrashLoopBackOff | Database not ready | Check postgres-cluster pods in postgres namespace |
| Sealed Secret not decrypting | Wrong controller namespace | Verify sealed-secrets in sealed-secrets namespace |
| ArgoCD sync failed | Webhook not ready | Wait and retry; check operator pods |
| LoadBalancer stuck in Pending | No LB provider | Install MetalLB (bare metal) or verify cloud provider |
| SIGTERM during migrations | Startup probe timeout | Increase failureThreshold in deployment |
# View pod logs
kubectl logs -n <namespace> -l app=<app-name> --tail=100
# View previous crashed container logs
kubectl logs -n <namespace> -l app=<app-name> --previous
# Describe failing pod
kubectl describe pod -n <namespace> <pod-name>
# Check events
kubectl get events -n <namespace> --sort-by='.lastTimestamp'
# Test database connectivity
kubectl exec -n postgres -it postgres-cluster-1 -- psql -U postgres -c "SELECT 1"
# Test sealed-secrets controller
kubeseal --fetch-cert --controller-name=sealed-secrets --controller-namespace=sealed-secrets
# View all ingress resources
kubectl get ingress -A
kubectl get ingressroute -A
kubectl get ingressrouteudp -A

# View container logs
kubectl logs -n os2iot-backend -l app=os2iot-backend
# View termination message
kubectl describe pod -n os2iot-backend -l app=os2iot-backend
# Access npm debug logs (if in CrashLoopBackOff)
kubectl exec -n os2iot-backend <pod-name> -- ls -la /home/node/.npm/_logs/

Before submitting changes, test your Helm templates locally with helm template <chart-name> applications/<chart-name>/
to catch syntax errors early.
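For example, to check a single chart (substitute any chart directory under applications/):

```bash
helm dependency build applications/os2iot-backend
helm lint applications/os2iot-backend
helm template os2iot-backend applications/os2iot-backend > /dev/null
```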
- Fork the repository
- Create a feature branch
- Make your changes
- Test with `helm template`
This project is licensed under the MPL-2.0 License. See the OS2IoT project for details.