DirectFileStore: add hostname to identify a process #323
BenoitMC wants to merge 1 commit into prometheus:main from BenoitMC:add_hostname
Conversation
This seems like the wrong solution to the problem. Files should not be shared across containers. Each container should have an internal metrics store.
In your opinion, is this a standard way of running Ruby apps? It feels like an unusual setup to me, but I may be lacking exposure to real situations. Unless this is very common, I feel it would be better for you to subclass DirectFileStore, override this method, and then set your subclass as the store in the config. That would achieve the same outcome for you without changing the gem for all users. I'd also suggest not overriding process_id, but instead overriding the method that generates the actual file path (filemap_filename).
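The subclassing approach suggested above might look roughly like the sketch below. It uses a minimal stand-in class rather than the real gem: in the actual prometheus-client gem, filemap_filename is a private method on an internal metric store class, so the exact override point should be verified against the installed gem's source.

```ruby
require "socket"

# Minimal stand-in for the gem's DirectFileStore, used only to
# illustrate the override pattern the reviewer suggests (not the
# gem's real internals).
class ExampleFileStore
  def initialize(dir:)
    @dir = dir
  end

  def filemap_filename(metric_name)
    File.join(@dir, "metric_#{metric_name}___#{Process.pid}.bin")
  end
end

# Subclass that qualifies the filename with the container hostname,
# keeping the change local to one application instead of patching the
# gem for every user.
class HostAwareFileStore < ExampleFileStore
  def filemap_filename(metric_name)
    File.join(@dir, "metric_#{metric_name}___#{Socket.gethostname}_#{Process.pid}.bin")
  end
end
```

The application would then pass its subclass as the data store in the Prometheus client configuration, leaving the gem's default behavior untouched for everyone else.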
You don't want to volume-mount the metrics file store. It is standard to use a tmpfs memory store inside each container. Otherwise you can't keep the metrics lifecycle synced with the container lifecycle.
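One way to set up the per-container tmpfs store described above (the paths, image name, and size here are illustrative, not from this PR):

```shell
# Mount an in-memory tmpfs at the metrics directory so the metric
# files live and die with the container (illustrative path and size).
docker run --tmpfs /tmp/prometheus_metrics:rw,size=64m my-sidekiq-image

# docker-compose equivalent:
# services:
#   sidekiq:
#     image: my-sidekiq-image
#     tmpfs:
#       - /tmp/prometheus_metrics:size=64m
```

With this layout, each container's metrics directory starts empty when the container starts and disappears when it stops, so stale files from dead processes never accumulate on a shared volume.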
Thank you for your responses. I’m just getting started with Prometheus. The idea was to have X Sidekiq containers storing their metrics in the same folder (but in separate files), which would then be aggregated and scraped through a single target. Should I instead separate them and have as many targets as there are containers? |
Our application runs on a single physical host but in multiple Docker containers that share the same persistent volume (e.g., multiple Sidekiq containers, or Puma + Sidekiq). Due to PID namespace isolation, the Ruby process PID inside each container can be the same. As a result, the DirectFileStore class ends up using the same files across containers, which leads to file corruption during concurrent writes.

This PR modifies the process identifier to include the hostname, ensuring that each container writes to a distinct file and preventing cross-container write conflicts.
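The identifier change this PR describes can be sketched with the Ruby standard library as follows. The method names are illustrative, not the gem's exact internals; in Docker, Socket.gethostname returns the container ID (or the value set via --hostname), which is distinct per container even when namespaced PIDs collide.

```ruby
require "socket"

# Hypothetical helper mirroring the PR's idea: qualify the process
# identifier with the container hostname so two containers that share
# a volume (and, due to PID namespaces, the same PID) never collide.
def process_identifier
  "#{Socket.gethostname}_#{Process.pid}"
end

# Hypothetical filename builder using the qualified identifier.
def metric_filename(dir, metric_name)
  File.join(dir, "metric_#{metric_name}___#{process_identifier}.bin")
end
```

Because the hostname differs per container, two containers with identical PIDs produce distinct filenames on the shared volume, which is exactly the collision the PR is trying to avoid.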