How to do tail sampling to drop health probes? #443
Comments
Hello François, I agree with you about moving to tail sampling to achieve your goal, instead of filtering. Are you trying to get to something that looks like this?

```yaml
processors:
  tail_sampling:
    decision_wait: 10s
    num_traces: 100
    expected_new_traces_per_sec: 10
    policies:
      [
        {
          name: drop-probes-policy,
          type: string_attribute,
          string_attribute: {
            key: http.route,
            values: [\/livez, \/readyz, \/health],
            enabled_regex_matching: true,
            invert_match: true
          }
        }
      ]
```
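With `enabled_regex_matching` and `invert_match` both set, this policy should sample only traces that contain no span whose `http.route` matches one of the probe paths, so an entire probe trace (root and child spans alike) is dropped rather than leaving orphaned spans behind.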
Please note we have another Helm chart that you can use to set up a tail-sampling architecture: https://github.com/grafana/helm-charts/tree/main/charts/grafana-sampling. This chart deploys a layered set of agents to load-balance and tail-sample, as described here: https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README.md#scaling-collectors-with-the-tail-sampling-processor. It can be used in tandem with the k8s monitoring Helm chart by exporting your traces to the load-balancing layer of the sampling chart, which lets you scale the tail-sampling architecture separately from the rest of your telemetry collection.
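As a rough illustration of the "export to the load-balancing layer" idea, the trace pipeline could point a plain OTLP exporter at the sampling chart's load-balancing service instead of the backend. This is a minimal sketch; the service name, namespace, and port below are hypothetical, not the charts' actual defaults:

```yaml
# Hypothetical sketch: forward traces to the sampling chart's load-balancing
# layer instead of sending them straight to the backend. The endpoint below
# (service name, namespace, port) is illustrative only.
exporters:
  otlp/sampling-lb:
    endpoint: grafana-sampling-lb.monitoring.svc.cluster.local:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/sampling-lb]
```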
Interesting, @rlankfo. It seems more complicated than just adding a stage to the config though :) But good to know for people who really need tail sampling, as it is a good workaround.
@Lp-Francois Is there any reason you can't use the filter processor to drop health-check spans, for instance? Adding the tail sampler for this purpose seems a bit heavy-handed. I can leave another comment on the other PR, but I'd be concerned about introducing this component into the k8s monitoring Helm chart because it won't be scalable by default, which is why we have the additional Helm chart that can be chained.
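For reference, the filter-processor approach being suggested would look roughly like the sketch below. The `http.route` attribute key and probe paths are assumptions about the application's instrumentation, not values taken from the chart:

```yaml
# Minimal sketch of the filter processor approach (attribute key and probe
# paths are assumptions, not taken from any actual chart configuration).
processors:
  filter/drop-probes:
    error_mode: ignore
    traces:
      span:
        # Drop any span whose route matches a liveness/readiness/health probe.
        - 'IsMatch(attributes["http.route"], "/(livez|readyz|health)")'
```

Note that, as described later in this thread, dropping spans this way removes the probe's root span but can leave any remaining spans of that trace orphaned.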
Hey @rlankfo, I actually tried the filter processor first (here), but got a lot of rootless spans, which were annoying and hard to get rid of without risking dropping rootless spans unrelated to these health checks. I was looking for a simple solution where updating a values.yaml would solve my problem, without chaining different otel/agent Helm charts.
@Lp-Francois thanks, I didn't see your other comment there. I responded there too. I agree the rootless spans are annoying but I'd assume you can drop the remaining spans from the health check traces explicitly? |
Hello,
I tried in a previous issue to find a solution to my current problem: how do I drop the traces coming from Kubernetes probes (readiness, liveness)? Sadly, using the span filter creates inconsistent traces that are missing the root span, since it drops the root.
Is there a way of doing tail sampling with this Helm chart, in order to drop the full trace containing a health probe (/health, /live or /ready)?
Or how would you recommend proceeding to drop these traces?