KMS is a collection of Kafka tools for monitoring and visualizing the status of a Kafka cluster.
After a lot of research into how to monitor Kafka, it turned out that there are many ways, but setting up the configuration was difficult. The goal of this project is to provide a simple and robust environment that runs those tools together.
- Burrow
  - Connects to the Kafka brokers and Zookeeper hosts and gathers data about Kafka.
  - Exposes a REST API at http://localhost:8000/v3/kafka
  - README.md
- Burrow-dashboard
  - AngularJS frontend.
  - Consumes Burrow's REST API to provide Kafka statistics and graphs.
  - README.md
- Burrow-exporter
  - Burrow by itself cannot expose its data as Prometheus-formatted metrics.
  - Acts as a data transformer.
  - Scrapes Burrow and exposes its data as Prometheus-formatted metrics.
  - README.md
- Kafka-minion
  - Scrapes directly from the Kafka brokers.
  - Does not use Zookeeper.
  - A different approach from Burrow.
  - Equivalent to Burrow + Burrow-exporter.
  - README.md
- Prometheus
  - Acts as a datasource for Grafana.
  - Aggregates all metrics from downstream targets.
  - Exposes Burrow metrics and Kafka-minion metrics.
  - README.md
- Grafana
  - Visualization of data.
  - Graphs, alerts, dashboards, queries, etc.
  - Uses Prometheus as a datasource.
  - README.md
KMS uses Docker Compose to wire up the underlying Docker containers. The docker-compose.yml file contains the KMS services and their configurations.
The 'Complete' setup (6 containers) is used by default. If you want a different setup, just comment out the unnecessary services in docker-compose.yml.
A running Kafka cluster is required.
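The default 'Complete' setup could look roughly like the following docker-compose.yml skeleton. This is an illustrative sketch only: the service names, images, and build contexts are assumptions, not the project's actual file (the exposed ports follow the endpoints used elsewhere in this README). To run a smaller setup, comment out the services you don't need:

```yaml
# Illustrative skeleton only -- the real docker-compose.yml in this repo
# is the source of truth. Image and service names here are assumptions.
version: "3"
services:
  burrow:
    build: ./burrow
    ports: ["8000:8000"]
  burrow-dashboard:
    image: example/burrow-dashboard   # hypothetical image name
    ports: ["80:80"]
  burrow-exporter:
    build: ./burrow-exporter
    ports: ["8880:8880"]
  kafka-minion:
    build: ./kafka-minion
    ports: ["7070:7070"]
  # prometheus:                       # commented out = disabled
  #   image: prom/prometheus
  #   ports: ["9090:9090"]
  grafana:
    image: grafana/grafana
    ports: ["3000:3000"]
```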
KMS monitors Kafka in 3 different ways:
- Using Burrow and Burrow-dashboard
- Using Burrow on Grafana
- Using Kafka-minion on Grafana

Data flow for each setup:
- Burrow and Burrow-dashboard:
  - burrow pulls/exposes kafka data
  - burrow-dashboard visualizes burrow
- Kafka-minion on Grafana:
  - kafka-minion scrapes kafka
  - prometheus scrapes kafka-minion
  - grafana visualizes prometheus
- Burrow on Grafana:
  - burrow pulls kafka data
  - burrow-exporter scrapes burrow
  - prometheus scrapes burrow-exporter
  - grafana visualizes prometheus
- Both Grafana setups combined:
  - burrow pulls kafka data
  - burrow-exporter scrapes burrow
  - prometheus scrapes burrow-exporter
  - kafka-minion scrapes kafka
  - prometheus scrapes kafka-minion
  - grafana visualizes prometheus
- Complete (all of the above):
  - burrow pulls/exposes kafka data
  - burrow-dashboard visualizes burrow
  - burrow-exporter scrapes burrow
  - prometheus scrapes burrow-exporter
  - kafka-minion scrapes kafka
  - prometheus scrapes kafka-minion
  - grafana visualizes prometheus
Some services accept environment variables, so there is no need to build a docker image (just pull the image and inject the variables).
- Grafana
  - The grafana folder contains provisioning files for a preconfigured datasource and dashboard. Those files get mounted at container startup.
- Prometheus
  - The prometheus.yml config file gets mounted at container startup.
- Burrow-dashboard
  - Burrow's address is set by environment variables at container startup.
- Burrow
  - The Dockerfile was changed to use a different location for burrow.toml.
  - Kafka and Zookeeper addresses are injected by docker-compose as build args and substituted into the burrow.toml config file when building the docker image.
- Burrow-exporter
  - Environment variables were removed from the Dockerfile.
  - Environment variables are injected by docker-compose as build args.
- Kafka-minion
  - Builds from the Dockerfile // TODO: check if necessary //.
  - Environment variables are injected by docker-compose into the running container.
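As an example of the Grafana provisioning mentioned above, a datasource provisioning file in Grafana's standard format typically looks like this (a sketch; the file path is illustrative, and the URL assumes Prometheus is reachable under the hostname prometheus on the compose network):

```yaml
# grafana/provisioning/datasources/prometheus.yml (illustrative path)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```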
$ ./start-kms.sh
- Chen Alkabets - Initial work - chenchuk77
KMS is a collection of 3 forked projects (and 3 additional pre-built images). Each forked project retains its own license, untouched, as it appeared when forked. I am not taking any credit for those projects. With that said, this project is all about the orchestration of multi-container services using Docker Compose.
- Burrow - Kafka Consumer Lag Checking.
- Burrow-dashboard - Kafka Dashboard for Burrow.
- Burrow_exporter - Prometheus exporter for Burrow.
- Kafka-minion - Prometheus exporter for Apache Kafka.
- Prometheus - Systems and service monitoring system.
- Grafana - Grafana allows you to query, visualize, alert on and understand your metrics.
- The Burrow-on-Grafana setup has an extra container because Burrow cannot expose metrics in Prometheus format.
- Note the conventions used throughout this project for clarification:
  - kafka data - raw data from Kafka
  - kafka metrics - Prometheus-formatted data
  - scrape - read data and expose it as Prometheus-formatted metrics
- Each Prometheus scraper (prometheus/kafka-minion/burrow-exporter) listens on /metrics
- http://localhost:9090/metrics # Prometheus
- http://localhost:7070/metrics # Kafka-minion
- http://localhost:8880/metrics # Burrow-exporter
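The two exporter endpoints above would be scraped by Prometheus with jobs along these lines (a sketch; the job names and compose hostnames are assumptions, while the ports match the endpoints listed above):

```yaml
# Illustrative scrape_configs fragment for prometheus.yml
scrape_configs:
  - job_name: burrow-exporter
    static_configs:
      - targets: ["burrow-exporter:8880"]
  - job_name: kafka-minion
    static_configs:
      - targets: ["kafka-minion:7070"]
```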
- Support a dynamic Kafka/Zookeeper connection string (variable number of brokers)
- Update system picture
- Support profiles (setup )
- Add grafana dashboards for provisioning
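The first TODO item (a dynamic connection string) could be sketched as a small shell helper that joins a variable number of broker or Zookeeper hosts into a single connection string. The function name and arguments are hypothetical, not part of KMS:

```shell
#!/bin/sh
# Join a variable number of hosts into a "host:port,host:port,..." string.
# build_connect_string is a hypothetical helper, not part of KMS.
build_connect_string() {
  port="$1"; shift          # first arg: port, remaining args: hosts
  out=""
  for host in "$@"; do
    out="${out:+$out,}$host:$port"
  done
  printf '%s\n' "$out"
}

build_connect_string 2181 zk1 zk2 zk3   # → zk1:2181,zk2:2181,zk3:2181
```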