To fight an epidemic, we must provide on-demand production of medicine tabs.
Our system must let additional patients request tabs without any manual scale-up action and without service-level degradation.
Please see:
- Functional specs for a quick overview of what the system does and the main problems it overcomes
- Technical specs for an in-depth description
Technical stack:
- MicroK8s, running with microk8s start
- KEDA, enabled as a MicroK8s addon
- Docker
- kubectl
- bash
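A minimal sketch of getting that stack ready, assuming a recent MicroK8s (on some versions the KEDA addon first requires enabling the community addon repository):
microk8s start                      # boot the local Kubernetes cluster
microk8s status --wait-ready        # block until the cluster is ready
microk8s enable community           # assumption: KEDA lives in the community addon repo
microk8s enable keda                # install KEDA into the cluster
microk8s kubectl get pods -n keda   # verify the KEDA operator pods are running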
Optional resources:
- credentials for an external Kafka service (not needed in the current automatic setup; only required if you swap in an external Kafka service)
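If you do later point the system at an external Kafka service, the usual pattern is to store those credentials in a Kubernetes Secret; a hypothetical sketch (the secret and key names are illustrative, not taken from the scripts):
kubectl create secret generic external-kafka-credentials \
  --from-literal=username=<kafka user> \
  --from-literal=password=<kafka password>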
This automatic quickstart procedure will leave you with the following resources up and running, with ONE patient producing tab orders:
A convenience setup script is available for bash. It performs all setup tasks:
./make_infra.sh
See the detailed instructions to learn what this convenience script sets up for you. It:
- deploys the Kafka service to Kubernetes
- deploys the KEDA facilities to Kubernetes (minikube only)
- creates the required topics in Kafka (see the sketch below)
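As a rough idea of the topic-creation step, here is a hypothetical by-hand equivalent, reusing the pod and service names that appear later in this README (the partition and replication counts are assumptions):
kubectl exec -it medicine-pubsub-kafka-0 -- bin/kafka-topics.sh \
  --create --topic tabs.orders --partitions 3 --replication-factor 1 \
  --bootstrap-server medicine-pubsub-kafka-bootstrap:9092
kubectl exec -it medicine-pubsub-kafka-0 -- bin/kafka-topics.sh \
  --create --topic tabs.deliveries --partitions 3 --replication-factor 1 \
  --bootstrap-server medicine-pubsub-kafka-bootstrap:9092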
A convenience build script is available for bash and minikube. It performs all build tasks:
./make_build.sh
Developers will use this script whenever they change application logic, as it updates the application Docker images used in Kubernetes.
See the detailed build instructions to learn what this convenience build script does for you. It:
- starts a local Docker registry in minikube (minikube only)
- builds the patient and medicine Docker images
- tags the images into the local minikube Docker registry (see the sketch below)
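A hypothetical by-hand equivalent of those build steps, assuming the local registry is reachable at localhost:5000 and the Dockerfiles live in ./patient and ./medicine (both assumptions):
docker build -t localhost:5000/patient:latest ./patient      # assumed build context
docker build -t localhost:5000/medicine:latest ./medicine    # assumed build context
docker push localhost:5000/patient:latest
docker push localhost:5000/medicine:latest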
A convenience deploy script is available for bash. It performs all deploy tasks:
./make_deploy.sh
See the detailed deploy instructions to learn what this convenience deploy script does for you. It:
- deploys the patient service as a Kubernetes Deployment
- deploys the medicine worker as a KEDA ScaledJob (see the sketch below)
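To make the ScaledJob idea concrete, here is a minimal hypothetical manifest, reusing the bootstrap address, consumer group and topics shown elsewhere in this README; the image tag, replica bound and lag threshold are assumptions, not values read from make_deploy.sh:
kubectl apply -f - <<'EOF'
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: medicine-maker-0
spec:
  jobTargetRef:
    template:
      spec:
        containers:
          - name: medicine
            image: localhost:5000/medicine:latest   # assumed local-registry tag
        restartPolicy: Never
  maxReplicaCount: 10                               # assumed upper bound on parallel jobs
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: medicine-pubsub-kafka-bootstrap:9092
        consumerGroup: tabs_makers
        topic: tabs.orders
        lagThreshold: "5"                           # assumed: roughly one job per 5 pending orders
EOF
KEDA then polls the lag of the tabs_makers group on tabs.orders and spawns or retires medicine jobs to match the backlog, which is what makes the scale-out automatic.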
This is the most convenient way to see things working, as the flow from every pod is visible there.
Use any Kafka pod for easy access to Kafka topics, e.g. to list all topics:
kubectl exec -it medicine-pubsub-kafka-0 \
-- bin/kafka-topics.sh --list --bootstrap-server medicine-pubsub-kafka-bootstrap:9092
Optionally, open a shell in a Kafka pod with kubectl exec -it medicine-pubsub-kafka-0 -- /bin/bash
and run the same commands from the pod's console.
Tab orders produced by the patient(s) can be seen going through the tabs.orders topic:
kubectl exec -it medicine-pubsub-kafka-1 \
-- bin/kafka-console-consumer.sh --bootstrap-server medicine-pubsub-kafka-bootstrap:9092 --topic tabs.orders
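You can also inject a test order by hand with the console producer (the payload format expected by the medicine workers is defined by the application and not shown here):
kubectl exec -it medicine-pubsub-kafka-0 -- bin/kafka-console-producer.sh \
  --bootstrap-server medicine-pubsub-kafka-bootstrap:9092 --topic tabs.orders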
The actual produced medicine tabs, emitted by the medicine worker(s), can be seen going through the tabs.deliveries topic:
kubectl exec -it medicine-pubsub-kafka-1 \
-- bin/kafka-console-consumer.sh --bootstrap-server medicine-pubsub-kafka-bootstrap:9092 --topic tabs.deliveries
To check how the tabs_makers consumer group is keeping up with orders (current offsets and lag per partition):
kubectl exec -it medicine-pubsub-kafka-0 -- bin/kafka-consumer-groups.sh --bootstrap-server medicine-pubsub-kafka-bootstrap:9092 --describe --group tabs_makers
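To watch the backlog drain after a load spike, one option is to re-run that check periodically, e.g. with watch:
watch -n 5 'kubectl exec medicine-pubsub-kafka-0 -- bin/kafka-consumer-groups.sh --bootstrap-server medicine-pubsub-kafka-bootstrap:9092 --describe --group tabs_makers'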
This can be tricky: each pod has its own logs, so one would have to inspect multiple pods manually (a helper loop is sketched after the commands below).
Get to know your setup's pod names with kubectl get pods
and find the medicine-maker-0-* and/or patient-0-* pods.
Get the log of the patient service in each patient pod:
kubectl logs -f <patient-0 pod name>
Get the log of the medicine service in each medicine maker pod:
kubectl logs -f <medicine-maker-0 pod name>
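Since medicine maker jobs come and go, a small loop that tails every matching pod at once saves that manual inspection; a sketch, assuming the pod name prefixes above:
for pod in $(kubectl get pods -o name | grep -E 'patient-0|medicine-maker-0'); do
  kubectl logs -f --prefix "$pod" &   # --prefix labels every line with its pod name
done
wait                                  # keep tailing until interrupted with Ctrl-C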
See how pods are being orchestrated with kubectl get pods
The current automatic setup leaves you with only ONE running patient. You can add more patients by adding replicas to the patient-0 Deployment:
kubectl scale -n default deployment patient-0 --replicas=<new replica count>
The number of medicine tab production jobs will scale accordingly (up or down). In fact, setting the replica count to 0 pauses patient tab order production; medicine workers will then eventually catch up on any buffered orders.
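For example, to ramp up to five patients and watch KEDA spawn matching medicine jobs:
kubectl scale -n default deployment patient-0 --replicas=5
kubectl get jobs -w   # the job count should grow as the order backlog builds up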
Possible improvements:
- Replace Kafka with Redis as the Pub/Sub service; get rid of the worker-count limit imposed by Kafka's partition count
- Make better use of the DLQ and implement a recovery strategy that retries saved failed payloads