
Fix broken rules with Prometheus v3 #1006

Conversation

leonnicolas


With Prometheus v3 [histograms are normalized](https://prometheus.io/docs/prometheus/latest/migration/#le-and-quantile-label-values) and this breaks rules that
select on the `le` label.
This PR updates the label selectors to use the new normalized value.

Before:
```
{le="1"}
```
Now with Prometheus v3:
```
{le="1.0"}
```
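
For illustration, a hypothetical rule expression that selects on `le` (the metric name is an example, not necessarily one of the rules changed in this PR) breaks in the same way:

```
# Prometheus v2: matches the bucket; Prometheus v3: returns no data
sum(rate(apiserver_request_duration_seconds_bucket{le="1"}[5m]))

# Prometheus v3: matches after label normalization
sum(rate(apiserver_request_duration_seconds_bucket{le="1.0"}[5m]))
```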

Fixes issue: kubernetes-monitoring#1005

Signed-off-by: leonnicolas <[email protected]>
@povilasv
Contributor

Does it / will it work with Prometheus 2?

@leonnicolas
Author

> Does it / will it work with Prometheus 2?

No. If we want to support both, I guess we could do something like

```
metrics{le=~"1|1.0"}
```

That would be fine with me. I have no idea how much this could impact load on Prometheus.
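
Since Prometheus label regex matchers are fully anchored, a matcher like `le=~"1|1.0"` selects exactly the label values `1` or `1.0` and nothing else. A sketch of how an affected selector could look with the regex matcher (the metric name is illustrative):

```
sum(rate(apiserver_request_duration_seconds_bucket{le=~"1|1.0"}[5m]))
```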

@povilasv
Contributor

I guess we need to do both then, would be good to test load. Or maybe some feature flag?

@leonnicolas
Author

How would a feature flag work / what would it look like? This gave me the idea to fix this downstream in kube-prometheus with this PR.

@povilasv
Contributor

🤔

```
  _config+:: {
    prometheusV3: true,  // or false
  },
```

But it's probably best to support both options and at some point deprecate v2, once v3 has been stable for a year or so.
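
One way such a flag could gate the matcher in jsonnet (a sketch only; the field and helper names are illustrative and not taken from the actual codebase):

```
{
  _config+:: {
    prometheusV3: true,
  },

  // Hypothetical helper: pick the le value that matches the running Prometheus.
  local leOne = if $._config.prometheusV3 then '1.0' else '1',

  // Example use when templating a rule expression:
  local exampleExpr = 'some_bucket_metric{le="%s"}' % leOne,
}
```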

@skl WDYT?

@skl
Collaborator

skl commented Dec 30, 2024

@povilasv @leonnicolas I was going to reply here but thought it would be easier to create a PR instead:

TL;DR

  • No performance issue observed using regex matcher on these rules
  • Suggest we go the regex route to support both Prometheus v2 and v3, as well as to address the issues reported in #965 (Fix: Handle float apiserver buckets)

Let me know what you think.

@skl closed this Dec 30, 2024