title | date | draft |
---|---|---|
Adopters | 2021-03-08 23:50:39 +0100 | false |
This document tracks people and use cases for the Prometheus Operator in production. By creating a list of production use cases, we hope to build a community of advisors with experience running the Prometheus Operator across various applications, operating environments, and cluster sizes. The Prometheus Operator development team may reach out periodically to check in on how the Prometheus Operator is working in the field and to update this list.
Go ahead and add your organization to the list.
Environments: AWS, Azure, Bare Metal
Uses kube-prometheus: Yes (with additional tight Giant Swarm integrations)
Details:
- One prometheus operator per management cluster and one prometheus instance per workload cluster
- Customers can also install kube-prometheus for their workload using our App Platform
- 760000 samples/s
- 35M active series
Environments: Google Cloud
Uses kube-prometheus: Yes (with additional Gitpod mixins)
Details:
- One prometheus instance per cluster (8 so far)
- 20000 samples/s
- 1M active series
https://kinvolk.io/lokomotive-kubernetes/
Environments: AKS, AWS, Bare Metal, Equinix Metal
Uses kube-prometheus: Yes
Details:
- Self-hosted (control plane runs as pods inside the cluster)
- Deploys full K8s stack (as a distro) or managed Kubernetes (currently only AKS supported)
- Deployed by Kinvolk for its own hosted infrastructure (including Flatcar Container Linux update server), as well as by Kinvolk customers and community users
Environments: AWS
Uses kube-prometheus: Yes
Details:
- One prometheus operator in our platform cluster and one prometheus instance per workload cluster
- 17k samples/s
- 841k active series
Environment: Google Cloud
Uses kube-prometheus: Yes
Details:
- 100k samples/s
- 1M active series
Environments: AWS, Azure, Google Cloud, Bare Metal
Uses kube-prometheus: Yes (with additional tight OpenShift integrations)
This is a meta user; please feel free to document specific OpenShift users!
All OpenShift clusters use the Prometheus Operator to manage the cluster monitoring stack as well as user workload monitoring. This means the Prometheus Operator's users include all OpenShift customers.
Environments: AWS, Google Cloud
Uses kube-prometheus: No
Opstrace installations use the Prometheus Operator internally to collect metrics and to alert. Opstrace users also often use the Prometheus Operator to scrape their own applications and remote_write those metrics to Opstrace.
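As a rough illustration of that scrape-and-forward pattern, here is a minimal sketch using the Prometheus Operator's Go API types: a ServiceMonitor selecting an application to scrape, and a Prometheus resource configured with a remote_write endpoint. The names, namespace, label selector, and URL are illustrative placeholders, not actual Opstrace settings.

```go
package main

import (
	"fmt"

	monitoringv1 "github.com/prometheus-operator/prometheus-operator/pkg/apis/monitoring/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// ServiceMonitor: tells the operator-managed Prometheus which
	// application endpoints to scrape (labels and port are placeholders).
	sm := &monitoringv1.ServiceMonitor{
		ObjectMeta: metav1.ObjectMeta{Name: "example-app", Namespace: "monitoring"},
		Spec: monitoringv1.ServiceMonitorSpec{
			Selector:  metav1.LabelSelector{MatchLabels: map[string]string{"app": "example-app"}},
			Endpoints: []monitoringv1.Endpoint{{Port: "metrics"}},
		},
	}

	// Prometheus: forwards scraped samples to a remote endpoint via
	// remote_write; the URL below is a made-up placeholder.
	prom := &monitoringv1.Prometheus{
		ObjectMeta: metav1.ObjectMeta{Name: "k8s", Namespace: "monitoring"},
	}
	prom.Spec.ServiceMonitorSelector = &metav1.LabelSelector{} // select all ServiceMonitors
	prom.Spec.RemoteWrite = []monitoringv1.RemoteWriteSpec{
		{URL: "https://metrics.example.com/api/prom/push"},
	}

	// In practice these objects are applied to the cluster (e.g. via the
	// generated clientset or as YAML manifests); printing is just a stand-in.
	fmt.Println(sm.Name, prom.Name)
}
```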
Environment: Google Cloud
Uses kube-prometheus: Yes
Details:
- HA Pair of Prometheus
- 4000 samples/s
- 100k active series
Environment: AWS
Uses kube-prometheus: Yes
Details:
- HA Pairs of Prometheus
- 25000 samples/s
- 1.2M active series
Environments: Bare Metal
Uses kube-prometheus: Yes
Details:
- HA Pair of Prometheus
- 517000 samples/s
- 10.7M active series
Environments: AWS, Azure, Google Cloud, cloudscale.ch, Exoscale, Swisscom
Uses kube-prometheus: Yes
Details (optional):
- A huge fleet of OpenShift and Kubernetes clusters, each using Prometheus Operator
- All managed by Project Syn, leveraging Commodore Components like component-rancher-monitoring which re-uses Prometheus Operator