Description
Hi,
I currently have 3 swarm clusters, each in a separate VPC on AWS. One swarm hosts the tools (Prometheus, Grafana, Jenkins, and so on); the other two are for production and staging.
Today it is set up like this: on AWS I first created private hosted zones for my production and staging stacks. In those hosted zones I created SRV records for my exporters (node-exporter, cadvisor, etc.). Then I created scrape configs that point to those SRV records, so Prometheus can dynamically resolve the list of internal node IPs and scrape metrics from them.
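For reference, a scrape job pointing at such an SRV record looks roughly like this (the job name and record name are placeholders for my actual hosted-zone entries):

```yaml
scrape_configs:
  - job_name: 'node-exporter-production'
    dns_sd_configs:
      - names:
          - '_node-exporter._tcp.production.internal'  # placeholder SRV record in the private hosted zone
        type: 'SRV'
        refresh_interval: 30s
```

Prometheus re-resolves the record on each refresh interval, so it picks up whatever IPs the SRV record currently lists.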
This is not ideal, because the list of IPs behind the SRV records is not updated dynamically as I add or remove nodes from my clusters. Making it dynamic would require a service that listens to Docker events to detect when nodes are added or removed and then updates those SRV records. Moreover, if dockerflow/docker-flow-monitor restarts, it does not get the alerts from those two stacks, because it only looks at the swarm cluster it is running on.
To get my alerts from production and staging back, I have to restart dockerflow/docker-flow-swarm-listener on those two stacks so it re-sends the alert configs to Prometheus.
Ideally, dockerflow/docker-flow-monitor should accept a list of LISTENER_ADDRESS values so it can call dockerflow/docker-flow-swarm-listener on multiple hosts. The same goes for DF_GET_NODES_URL.
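To illustrate the proposal, a hypothetical comma-separated form could look like this (the hostnames are placeholders, and this syntax is not currently supported; it is only a sketch of the idea):

```yaml
services:
  monitor:
    image: dockerflow/docker-flow-monitor
    environment:
      # Hypothetical: one listener per swarm cluster, comma-separated
      - LISTENER_ADDRESS=swarm-listener-prod.example.com,swarm-listener-staging.example.com
      # DF_GET_NODES_URL would follow the same pattern, one URL per cluster
```

The monitor would then query each listener on startup, so a restart would recover the alert configs from all clusters instead of just the local one.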
What do you think? Does it make sense to you? That would be very helpful!