Handling of disappeared labelsets #1047
Just noted that when using a custom collector, all labelset+metric combinations that are not explicitly re-set in the `collect()` method simply disappear from the output. Main problem again (see #1045): AFAICS, this is nowhere documented; it may not even be intended behaviour but just some current implementation detail. I leave the issue open for now, because there's still no solution for (a) (i.e. when doing direct instrumentation).
Was just reading through the documentation issue you created. As you figured out, this is the use case for a custom collector. It is the intended behaviour that only metrics which are re-created during every scrape get exported. There are use cases where an individual series needs to be removed from a metric in the direct-instrumentation case; those should be covered by `remove()`. Anyway, I think this should probably be closed, with #1045 covering the documentation? It would definitely be nice to have more documentation for custom collectors.
Sounds reasonable... but I think this should also be documented in some place, i.e. that only metrics re-created during every scrape get exported. But yeah, I think we can close this.
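The custom-collector semantics described above can be illustrated with a small stand-in. Everything below (the `Series` class, `DriveErrorCollector`, and `scrape()`) is hypothetical scaffolding, not the real prometheus_client API; it only sketches the key point that `collect()` is called on every scrape, so series not re-created in that call are simply gone:

```python
# Hypothetical stand-in (NOT the real prometheus_client API) illustrating why
# custom collectors handle disappearing labelsets automatically: each scrape
# calls collect(), and only the series yielded by that call get exported.

class Series:
    def __init__(self, labels, value):
        self.labels = labels
        self.value = value

class DriveErrorCollector:
    """Rebuilds all series from the currently visible drives on each scrape."""
    def __init__(self, list_drives):
        self.list_drives = list_drives  # callable returning {drive_name: errors}

    def collect(self):
        for name, errors in self.list_drives().items():
            yield Series({"drive_name": name}, errors)

def scrape(collector):
    """Roughly what a registry does on each HTTP scrape."""
    return {tuple(s.labels.items()): s.value for s in collector.collect()}

drives = {"sda": 0, "sdb": 3}
collector = DriveErrorCollector(lambda: drives)
first = scrape(collector)   # contains sda and sdb
del drives["sdb"]           # drive replaced/removed between scrapes
second = scrape(collector)  # sdb is gone automatically, no remove() needed
```

No per-series removal bookkeeping is required here because state is rebuilt from scratch on every scrape; that is exactly the trade-off against direct instrumentation discussed in the rest of this issue.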
Hey.
It would be nice if the client offered some (semi-automatic) handling of disappeared labelsets.
What do I mean by that?
Well, I guess there are (at least) two classes of labelsets for a given metric:

- `number_of_HTTP_responses` with a labelname like `status` and values for that like `500`, `200`, `404`: then I'd say that these labels never really disappear. At most, they just no longer count up.
- `physical_drive_medium_errors` with a labelname like `drive_name` and values for that like `foo`, `bar`, `baz` (something which uniquely identifies the drive in the system): then any value may disappear, e.g. when the drive breaks and is replaced. So in this case, the labelset for the drive that is gone should no longer be exported (with a value that would never change again), but be removed.
Now there are further (at least) two ways of instrumentation:
a. One sets/increases the metrics right at the place where the objects are actually worked with - i.e. scattered throughout the code
b. One has a more or less central place in the code, where the values are gathered and set.
I'd say that in practice it's often actually (b) - at least every time one cannot really integrate instrumentation natively but merely parses some data and transforms it into metrics.
Even `node_exporter` would also be (b), I'd say.

How to get rid of disappeared labelsets?
Well, there are of course the `clear()` and `remove()` methods.

The problem with the former is that it removes everything. In (b) one might be tempted to say that this is not a problem, as one could simply do something like (pseudo-code): `clear_all_metrics()` would simply remove everything, and `set_all_metrics()` would re-set all that are still there.

btw: I guess that approach is also problematic when one is not just printing the metrics output once to stdout or using `write_to_textfile()` - because when e.g. some webserver runs that continuously exports the metrics, it might be queried just after `clear_all_metrics()` but before `set_all_metrics()` (or before that has finished).

In (a), this wouldn't really work anyway, but even in (b) it's problematic, as e.g. the `_created` timestamp metrics for `Counter`s would also get cleared and re-set every time. Also, `Counter.inc()` wouldn't work anymore - at least not directly, because one would somehow need to keep track of the previous value.

With `clear()` therefore not really useful in practice (IMO), `remove()` remains.

The problem with `remove()` is IMO that one typically has no record of what to remove. Like when the drive from above goes away, an exporter does not get some active indication that it's gone - it's just no longer there in the parsed output.
So effectively, this forces one to keep track of all labelsets per metric that one had in the previous call to `set_all_metrics()`, compare that with what is still left in the current call, and then `remove()` the ones that are gone, one by one (thereby also keeping things like `_created` for `Counter`s that haven't disappeared).

Is there any recommendation on how to handle that scenario?
Or any means to handle this more out-of-the-box?
If not, what about a framework like this:
- `prometheus_client` tracks an additional value (maybe a `bool` or an `int` - let's use an `int` here, but I haven't really thought through whether that brings any benefits).
- Initially, that `int` is set to `0`.
- Whenever any update happens (`set()`, `inc()`, etc.) for a given labelset+metric, that `int` is increased by `1`.
- There is a new method `clear_non_updated()`, which does what `clear()` does, but only for those labelsets+metrics where the `int` is still `0`, i.e. those which haven't been updated.

After it has cleared all these, it re-sets the `int` to `0` for all remaining ones. Maybe that could be placed in a separate function (not really sure whether there'd be any use case for that).

With that it should be possible to rewrite the main loop from above to something like:
It might make sense to do the actual implementation a bit different from the above.
Perhaps one can maintain the `int` that counts whether a labelset+metric was updated at the registry, and also place `clear_non_updated()` at registry level.

The idea is that one could then have different registries: some for labelset+metrics that should live on even if not updated, and some for labelset+metrics which should then disappear.
Problem with that seems however to be that `start_http_server()` and similar accept only one `registry`. So that probably doesn't work.

Another way could be a parameter when the metric object is created, which tells whether cleanup via `clear_non_updated()` should be performed or not.

Perhaps `cleanup_group=somestring`: when `clear_non_updated()` is called with no such `cleanup_group`, it would simply clean up all (unless they'd been updated, of course), but if one was given, it would clean up only those where the name matches (again, unless they'd been updated, of course).
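The `cleanup_group` idea could look roughly like this. Again, all of it is purely hypothetical (`GroupedMetric` and this free-standing `clear_non_updated()` are made up to illustrate the matching rule):

```python
# Hypothetical sketch of the cleanup_group idea: each metric is created with an
# optional group name, and clear_non_updated() only sweeps matching metrics.

class GroupedMetric:
    def __init__(self, cleanup_group=None):
        self.cleanup_group = cleanup_group
        self.values = {}   # labelset -> value
        self.updated = {}  # labelset -> update count since the last sweep

    def set(self, labelset, value):
        self.values[labelset] = value
        self.updated[labelset] = self.updated.get(labelset, 0) + 1

def clear_non_updated(metrics, cleanup_group=None):
    for m in metrics:
        # With no group given, sweep everything; otherwise only matching metrics.
        if cleanup_group is not None and m.cleanup_group != cleanup_group:
            continue
        for labelset in [l for l, n in m.updated.items() if n == 0]:
            del m.values[labelset]
            del m.updated[labelset]
        for labelset in m.updated:
            m.updated[labelset] = 0

volatile = GroupedMetric(cleanup_group="drives")
persistent = GroupedMetric(cleanup_group="http")
volatile.set(("drive_name", "foo"), 1)
persistent.set(("status", "500"), 7)
# Sweep only the "drives" group twice without re-setting "foo": it is removed
# on the second sweep, while the "http" metric is never touched.
clear_non_updated([volatile, persistent], cleanup_group="drives")
clear_non_updated([volatile, persistent], cleanup_group="drives")
```

This would give the stale-series sweep per-group granularity inside a single registry, sidestepping the one-registry limitation of `start_http_server()` mentioned above.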
Cheers,
Chris.