feat(helm): update chart kube-prometheus-stack to 39.13.3 #707
Open
chii-bot wants to merge 1 commit into main from renovate/kube-prometheus-stack-39.x
Conversation
chii-bot bot added labels renovate/helm, type/patch, size/XS (denotes a PR that changes 0-9 lines, ignoring generated files) and area/cluster (changes made in the cluster directory) on Aug 5, 2022
@@ -5689,7 +5689,7 @@
"steppedLine": false,
"targets": [
{
- "expr": "sum(rate(grpc_server_started_total{job=\"$cluster\",grpc_type=\"unary\"}[5m]))",
+ "expr": "sum(rate(grpc_server_started_total{job=\"$cluster\",grpc_type=\"unary\"}[$__rate_interval]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "RPC Rate",
@@ -5698,7 +5698,7 @@
"step": 2
},
{
- "expr": "sum(rate(grpc_server_handled_total{job=\"$cluster\",grpc_type=\"unary\",grpc_code!=\"OK\"}[5m]))",
+ "expr": "sum(rate(grpc_server_handled_total{job=\"$cluster\",grpc_type=\"unary\",grpc_code=~\"Unknown|FailedPrecondition|ResourceExhausted|Internal|Unavailable|DataLoss|DeadlineExceeded\"}[$__rate_interval]))",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "RPC Failed Rate",
@@ -5945,7 +5945,7 @@
"steppedLine": true,
"targets": [
{
- "expr": "histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket{job=\"$cluster\"}[5m])) by (instance, le))",
+ "expr": "histogram_quantile(0.99, sum(rate(etcd_disk_wal_fsync_duration_seconds_bucket{job=\"$cluster\"}[$__rate_interval])) by (instance, le))",
"hide": false,
"intervalFactor": 2,
"legendFormat": "{{instance}} WAL fsync",
@@ -5954,7 +5954,7 @@
"step": 4
},
{
- "expr": "histogram_quantile(0.99, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket{job=\"$cluster\"}[5m])) by (instance, le))",
+ "expr": "histogram_quantile(0.99, sum(rate(etcd_disk_backend_commit_duration_seconds_bucket{job=\"$cluster\"}[$__rate_interval])) by (instance, le))",
"intervalFactor": 2,
"legendFormat": "{{instance}} DB fsync",
"metric": "etcd_disk_backend_commit_duration_seconds_bucket",
@@ -6112,7 +6112,7 @@
"steppedLine": false,
"targets": [
{
- "expr": "rate(etcd_network_client_grpc_received_bytes_total{job=\"$cluster\"}[5m])",
+ "expr": "rate(etcd_network_client_grpc_received_bytes_total{job=\"$cluster\"}[$__rate_interval])",
"intervalFactor": 2,
"legendFormat": "{{instance}} Client Traffic In",
"metric": "etcd_network_client_grpc_received_bytes_total",
@@ -6188,7 +6188,7 @@
"steppedLine": false,
"targets": [
{
- "expr": "rate(etcd_network_client_grpc_sent_bytes_total{job=\"$cluster\"}[5m])",
+ "expr": "rate(etcd_network_client_grpc_sent_bytes_total{job=\"$cluster\"}[$__rate_interval])",
"intervalFactor": 2,
"legendFormat": "{{instance}} Client Traffic Out",
"metric": "etcd_network_client_grpc_sent_bytes_total",
@@ -6264,7 +6264,7 @@
"steppedLine": false,
"targets": [
{
- "expr": "sum(rate(etcd_network_peer_received_bytes_total{job=\"$cluster\"}[5m])) by (instance)",
+ "expr": "sum(rate(etcd_network_peer_received_bytes_total{job=\"$cluster\"}[$__rate_interval])) by (instance)",
"intervalFactor": 2,
"legendFormat": "{{instance}} Peer Traffic In",
"metric": "etcd_network_peer_received_bytes_total",
@@ -6341,7 +6341,7 @@
"steppedLine": false,
"targets": [
{
- "expr": "sum(rate(etcd_network_peer_sent_bytes_total{job=\"$cluster\"}[5m])) by (instance)",
+ "expr": "sum(rate(etcd_network_peer_sent_bytes_total{job=\"$cluster\"}[$__rate_interval])) by (instance)",
"hide": false,
"interval": "",
"intervalFactor": 2,
@@ -6425,7 +6425,7 @@
"steppedLine": false,
"targets": [
{
- "expr": "sum(rate(etcd_server_proposals_failed_total{job=\"$cluster\"}[5m]))",
+ "expr": "sum(rate(etcd_server_proposals_failed_total{job=\"$cluster\"}[$__rate_interval]))",
"intervalFactor": 2,
"legendFormat": "Proposal Failure Rate",
"metric": "etcd_server_proposals_failed_total",
@@ -6441,7 +6441,7 @@
"step": 2
},
{
- "expr": "sum(rate(etcd_server_proposals_committed_total{job=\"$cluster\"}[5m]))",
+ "expr": "sum(rate(etcd_server_proposals_committed_total{job=\"$cluster\"}[$__rate_interval]))",
"intervalFactor": 2,
"legendFormat": "Proposal Commit Rate",
"metric": "etcd_server_proposals_committed_total",
@@ -6449,7 +6449,7 @@
"step": 2
},
{
- "expr": "sum(rate(etcd_server_proposals_applied_total{job=\"$cluster\"}[5m]))",
+ "expr": "sum(rate(etcd_server_proposals_applied_total{job=\"$cluster\"}[$__rate_interval]))",
"intervalFactor": 2,
"legendFormat": "Proposal Apply Rate",
"refId": "D",
@@ -6570,6 +6570,115 @@
"show": true
}
]
+ },
+ {
+ "aliasColors": {},
+ "bars": false,
+ "dashLength": 10,
+ "dashes": false,
+ "datasource": "$datasource",
+ "decimals": 0,
+ "editable": true,
+ "error": false,
+ "fieldConfig": {
+ "defaults": {
+ "custom": {}
+ },
+ "overrides": []
+ },
+ "fill": 0,
+ "fillGradient": 0,
+ "gridPos": {
+ "h": 7,
+ "w": 12,
+ "x": 0,
+ "y": 28
+ },
+ "hiddenSeries": false,
+ "id": 42,
+ "isNew": true,
+ "legend": {
+ "alignAsTable": false,
+ "avg": false,
+ "current": false,
+ "max": false,
+ "min": false,
+ "rightSide": false,
+ "show": false,
+ "total": false,
+ "values": false
+ },
+ "lines": true,
+ "linewidth": 2,
+ "links": [],
+ "nullPointMode": "connected",
+ "options": {
+ "alertThreshold": true
+ },
+ "percentage": false,
+ "pluginVersion": "7.4.3",
+ "pointradius": 5,
+ "points": false,
+ "renderer": "flot",
+ "seriesOverrides": [],
+ "spaceLength": 10,
+ "stack": false,
+ "steppedLine": false,
+ "targets": [
+ {
+ "expr": "histogram_quantile(0.99, sum by (instance, le) (rate(etcd_network_peer_round_trip_time_seconds_bucket{job=\"$cluster\"}[$__rate_interval])))",
+ "interval": "",
+ "intervalFactor": 2,
+ "legendFormat": "{{instance}} Peer round trip time",
+ "metric": "etcd_network_peer_round_trip_time_seconds_bucket",
+ "refId": "A",
+ "step": 2
+ }
+ ],
+ "thresholds": [],
+ "timeFrom": null,
+ "timeRegions": [],
+ "timeShift": null,
+ "title": "Peer round trip time",
+ "tooltip": {
+ "msResolution": false,
+ "shared": true,
+ "sort": 0,
+ "value_type": "individual"
+ },
+ "type": "graph",
+ "xaxis": {
+ "buckets": null,
+ "mode": "time",
+ "name": null,
+ "show": true,
+ "values": []
+ },
+ "yaxes": [
+ {
+ "$$hashKey": "object:925",
+ "decimals": null,
+ "format": "s",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ },
+ {
+ "$$hashKey": "object:926",
+ "format": "short",
+ "label": null,
+ "logBase": 1,
+ "max": null,
+ "min": null,
+ "show": true
+ }
+ ],
+ "yaxis": {
+ "align": false,
+ "alignLevel": null
+ }
}
],
"title": "New row"
@@ -6578,7 +6687,9 @@
"schemaVersion": 13,
"sharedCrosshair": false,
"style": "dark",
- "tags": [],
+ "tags": [
+ "etcd-mixin"
+ ],
"templating": {
"list": [
{
@@ -6587,7 +6698,7 @@
"value": "Prometheus"
},
"hide": 0,
- "label": null,
+ "label": "Data Source",
"name": "datasource",
"options": [],
"query": "prometheus",
@@ -6609,7 +6720,7 @@
"name": "cluster",
"options": [],
"query": "label_values(etcd_server_has_leader, job)",
- "refresh": 1,
+ "refresh": 2,
"regex": "",
"sort": 2,
"tagValuesQuery": "",
@@ -38691,6 +38802,7 @@
namespace: default
name: kube-prometheus-stack-operator
path: /admission-prometheusrules/mutate
+ timeoutSeconds: 10
admissionReviewVersions: ["v1", "v1beta1"]
sideEffects: None
---
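The new timeoutSeconds: 10 caps how long the API server waits on the operator's admission webhook before giving up and applying the webhook's failurePolicy; for admissionregistration.k8s.io/v1 webhooks the default is already 10 seconds (30 in v1beta1), so this pins the behavior explicitly. A minimal sketch of the resulting stanza (the webhook name is hypothetical, not taken from this diff):

    webhooks:
      - name: prometheusrulemutate.monitoring.example   # hypothetical name
        clientConfig:
          service:
            namespace: default
            name: kube-prometheus-stack-operator
            path: /admission-prometheusrules/mutate
        timeoutSeconds: 10   # API server aborts the call after 10s
        admissionReviewVersions: ["v1", "v1beta1"]
        sideEffects: None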
@@ -38958,61 +39070,84 @@
groups:
- name: etcd
rules:
+ - alert: etcdMembersDown
+ annotations:
+ description: 'etcd cluster "{{ $labels.job }}": members are down ({{ $value }}).'
+ summary: etcd cluster members are down.
+ expr: |-
+ max without (endpoint) (
+ sum without (instance) (up{job=~".*etcd.*"} == bool 0)
+ or
+ count without (To) (
+ sum without (instance) (rate(etcd_network_peer_sent_failures_total{job=~".*etcd.*"}[120s])) > 0.01
+ )
+ )
+ > 0
+ for: 10m
+ labels:
+ severity: critical
- alert: etcdInsufficientMembers
annotations:
- message: 'etcd cluster "{{ $labels.job }}": insufficient members ({{ $value }}).'
- expr: sum(up{job=~".*etcd.*"} == bool 1) by (job) < ((count(up{job=~".*etcd.*"}) by (job) + 1) / 2)
+ description: 'etcd cluster "{{ $labels.job }}": insufficient members ({{ $value }}).'
+ summary: etcd cluster has insufficient number of members.
+ expr: sum(up{job=~".*etcd.*"} == bool 1) without (instance) < ((count(up{job=~".*etcd.*"}) without (instance) + 1) / 2)
for: 3m
labels:
severity: critical
- alert: etcdNoLeader
annotations:
- message: 'etcd cluster "{{ $labels.job }}": member {{ $labels.instance }} has no leader.'
+ description: 'etcd cluster "{{ $labels.job }}": member {{ $labels.instance }} has no leader.'
+ summary: etcd cluster has no leader.
expr: etcd_server_has_leader{job=~".*etcd.*"} == 0
for: 1m
labels:
severity: critical
- alert: etcdHighNumberOfLeaderChanges
annotations:
- message: 'etcd cluster "{{ $labels.job }}": instance {{ $labels.instance }} has seen {{ $value }} leader changes within the last hour.'
- expr: rate(etcd_server_leader_changes_seen_total{job=~".*etcd.*"}[15m]) > 3
- for: 15m
+ description: 'etcd cluster "{{ $labels.job }}": {{ $value }} leader changes within the last 15 minutes. Frequent elections may be a sign of insufficient resources, high network latency, or disruptions by other components and should be investigated.'
+ summary: etcd cluster has high number of leader changes.
+ expr: increase((max without (instance) (etcd_server_leader_changes_seen_total{job=~".*etcd.*"}) or 0*absent(etcd_server_leader_changes_seen_total{job=~".*etcd.*"}))[15m:1m]) >= 4
+ for: 5m
labels:
severity: warning
- alert: etcdHighNumberOfFailedGRPCRequests
annotations:
- message: 'etcd cluster "{{ $labels.job }}": {{ $value }}% of requests for {{ $labels.grpc_method }} failed on etcd instance {{ $labels.instance }}.'
+ description: 'etcd cluster "{{ $labels.job }}": {{ $value }}% of requests for {{ $labels.grpc_method }} failed on etcd instance {{ $labels.instance }}.'
+ summary: etcd cluster has high number of failed grpc requests.
expr: |-
- 100 * sum(rate(grpc_server_handled_total{job=~".*etcd.*", grpc_code!="OK"}[5m])) BY (job, instance, grpc_service, grpc_method)
+ 100 * sum(rate(grpc_server_handled_total{job=~".*etcd.*", grpc_code=~"Unknown|FailedPrecondition|ResourceExhausted|Internal|Unavailable|DataLoss|DeadlineExceeded"}[5m])) without (grpc_type, grpc_code)
/
- sum(rate(grpc_server_handled_total{job=~".*etcd.*"}[5m])) BY (job, instance, grpc_service, grpc_method)
+ sum(rate(grpc_server_handled_total{job=~".*etcd.*"}[5m])) without (grpc_type, grpc_code)
> 1
for: 10m
labels:
severity: warning
- alert: etcdHighNumberOfFailedGRPCRequests
annotations:
- message: 'etcd cluster "{{ $labels.job }}": {{ $value }}% of requests for {{ $labels.grpc_method }} failed on etcd instance {{ $labels.instance }}.'
+ description: 'etcd cluster "{{ $labels.job }}": {{ $value }}% of requests for {{ $labels.grpc_method }} failed on etcd instance {{ $labels.instance }}.'
+ summary: etcd cluster has high number of failed grpc requests.
expr: |-
- 100 * sum(rate(grpc_server_handled_total{job=~".*etcd.*", grpc_code!="OK"}[5m])) BY (job, instance, grpc_service, grpc_method)
+ 100 * sum(rate(grpc_server_handled_total{job=~".*etcd.*", grpc_code=~"Unknown|FailedPrecondition|ResourceExhausted|Internal|Unavailable|DataLoss|DeadlineExceeded"}[5m])) without (grpc_type, grpc_code)
/
- sum(rate(grpc_server_handled_total{job=~".*etcd.*"}[5m])) BY (job, instance, grpc_service, grpc_method)
+ sum(rate(grpc_server_handled_total{job=~".*etcd.*"}[5m])) without (grpc_type, grpc_code)
> 5
for: 5m
labels:
severity: critical
- alert: etcdGRPCRequestsSlow
annotations:
- message: 'etcd cluster "{{ $labels.job }}": gRPC requests to {{ $labels.grpc_method }} are taking {{ $value }}s on etcd instance {{ $labels.instance }}.'
+ description: 'etcd cluster "{{ $labels.job }}": 99th percentile of gRPC requests is {{ $value }}s on etcd instance {{ $labels.instance }} for {{ $labels.grpc_method }} method.'
+ summary: etcd grpc requests are slow
expr: |-
- histogram_quantile(0.99, sum(rate(grpc_server_handling_seconds_bucket{job=~".*etcd.*", grpc_type="unary"}[5m])) by (job, instance, grpc_service, grpc_method, le))
+ histogram_quantile(0.99, sum(rate(grpc_server_handling_seconds_bucket{job=~".*etcd.*", grpc_method!="Defragment", grpc_type="unary"}[5m])) without(grpc_type))
> 0.15
for: 10m
labels:
severity: critical
- alert: etcdMemberCommunicationSlow
annotations:
- message: 'etcd cluster "{{ $labels.job }}": member communication with {{ $labels.To }} is taking {{ $value }}s on etcd instance {{ $labels.instance }}.'
+ description: 'etcd cluster "{{ $labels.job }}": member communication with {{ $labels.To }} is taking {{ $value }}s on etcd instance {{ $labels.instance }}.'
+ summary: etcd cluster member communication is slow.
expr: |-
histogram_quantile(0.99, rate(etcd_network_peer_round_trip_time_seconds_bucket{job=~".*etcd.*"}[5m]))
> 0.15
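Note the filter change inside both etcdHighNumberOfFailedGRPCRequests expressions: instead of counting everything with grpc_code!="OK" as a failure, the new rules match only codes that indicate a server-side problem, so statuses a healthy client can legitimately produce (NotFound, Canceled, InvalidArgument, and so on) no longer inflate the error ratio:

    # Old numerator: any non-OK response counts as a failure.
    sum(rate(grpc_server_handled_total{job=~".*etcd.*", grpc_code!="OK"}[5m]))

    # New numerator: only genuinely server-side failure codes.
    sum(rate(grpc_server_handled_total{job=~".*etcd.*",
        grpc_code=~"Unknown|FailedPrecondition|ResourceExhausted|Internal|Unavailable|DataLoss|DeadlineExceeded"}[5m]))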
@@ -39021,53 +39156,64 @@
severity: warning
- alert: etcdHighNumberOfFailedProposals
annotations:
- message: 'etcd cluster "{{ $labels.job }}": {{ $value }} proposal failures within the last hour on etcd instance {{ $labels.instance }}.'
+ description: 'etcd cluster "{{ $labels.job }}": {{ $value }} proposal failures within the last 30 minutes on etcd instance {{ $labels.instance }}.'
+ summary: etcd cluster has high number of proposal failures.
expr: rate(etcd_server_proposals_failed_total{job=~".*etcd.*"}[15m]) > 5
for: 15m
labels:
severity: warning
- alert: etcdHighFsyncDurations
annotations:
- message: 'etcd cluster "{{ $labels.job }}": 99th percentile fync durations are {{ $value }}s on etcd instance {{ $labels.instance }}.'
+ description: 'etcd cluster "{{ $labels.job }}": 99th percentile fsync durations are {{ $value }}s on etcd instance {{ $labels.instance }}.'
+ summary: etcd cluster 99th percentile fsync durations are too high.
expr: |-
histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket{job=~".*etcd.*"}[5m]))
> 0.5
for: 10m
labels:
severity: warning
+ - alert: etcdHighFsyncDurations
+ annotations:
+ description: 'etcd cluster "{{ $labels.job }}": 99th percentile fsync durations are {{ $value }}s on etcd instance {{ $labels.instance }}.'
+ summary: etcd cluster 99th percentile fsync durations are too high.
+ expr: |-
+ histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket{job=~".*etcd.*"}[5m]))
+ > 1
+ for: 10m
+ labels:
+ severity: critical
- alert: etcdHighCommitDurations
annotations:
- message: 'etcd cluster "{{ $labels.job }}": 99th percentile commit durations {{ $value }}s on etcd instance {{ $labels.instance }}.'
+ description: 'etcd cluster "{{ $labels.job }}": 99th percentile commit durations {{ $value }}s on etcd instance {{ $labels.instance }}.'
+ summary: etcd cluster 99th percentile commit durations are too high.
expr: |-
histogram_quantile(0.99, rate(etcd_disk_backend_commit_duration_seconds_bucket{job=~".*etcd.*"}[5m]))
> 0.25
for: 10m
labels:
severity: warning
- - alert: etcdHighNumberOfFailedHTTPRequests
+ - alert: etcdDatabaseQuotaLowSpace
annotations:
- message: '{{ $value }}% of requests for {{ $labels.method }} failed on etcd instance {{ $labels.instance }}'
- expr: |-
- sum(rate(etcd_http_failed_total{job=~".*etcd.*", code!="404"}[5m])) BY (method) / sum(rate(etcd_http_received_total{job=~".*etcd.*"}[5m]))
- BY (method) > 0.01
+ description: 'etcd cluster "{{ $labels.job }}": database size exceeds the defined quota on etcd instance {{ $labels.instance }}, please defrag or increase the quota as the writes to etcd will be disabled when it is full.'
+ summary: etcd cluster database is running full.
+ expr: (last_over_time(etcd_mvcc_db_total_size_in_bytes[5m]) / last_over_time(etcd_server_quota_backend_bytes[5m]))*100 > 95
for: 10m
labels:
- severity: warning
- - alert: etcdHighNumberOfFailedHTTPRequests
+ severity: critical
+ - alert: etcdExcessiveDatabaseGrowth
annotations:
- message: '{{ $value }}% of requests for {{ $labels.method }} failed on etcd instance {{ $labels.instance }}.'
- expr: |-
- sum(rate(etcd_http_failed_total{job=~".*etcd.*", code!="404"}[5m])) BY (method) / sum(rate(etcd_http_received_total{job=~".*etcd.*"}[5m]))
- BY (method) > 0.05
+ description: 'etcd cluster "{{ $labels.job }}": Predicting running out of disk space in the next four hours, based on write observations within the past four hours on etcd instance {{ $labels.instance }}, please check as it might be disruptive.'
+ summary: etcd cluster database growing very fast.
+ expr: predict_linear(etcd_mvcc_db_total_size_in_bytes[4h], 4*60*60) > etcd_server_quota_backend_bytes
for: 10m
labels:
- severity: critical
- - alert: etcdHTTPRequestsSlow
+ severity: warning
+ - alert: etcdDatabaseHighFragmentationRatio
annotations:
- message: etcd instance {{ $labels.instance }} HTTP requests to {{ $labels.method }} are slow.
- expr: |-
- histogram_quantile(0.99, rate(etcd_http_successful_duration_seconds_bucket[5m]))
- > 0.15
+ description: 'etcd cluster "{{ $labels.job }}": database size in use on instance {{ $labels.instance }} is {{ $value | humanizePercentage }} of the actual allocated disk space, please run defragmentation (e.g. etcdctl defrag) to retrieve the unused fragmented disk space.'
+ runbook_url: https://etcd.io/docs/v3.5/op-guide/maintenance/#defragmentation
+ summary: etcd database size in use is less than 50% of the actual allocated storage.
+ expr: (last_over_time(etcd_mvcc_db_total_size_in_use_in_bytes[5m]) / last_over_time(etcd_mvcc_db_total_size_in_bytes[5m])) < 0.5
for: 10m
labels:
severity: warning
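The replacement alerts also shift from reacting to HTTP failures toward anticipating capacity problems. etcdExcessiveDatabaseGrowth relies on predict_linear(), which fits a least-squares line through the samples in the lookback window and extrapolates it forward by the given number of seconds:

    # Project DB size 4h (4*60*60 s) ahead from the last 4h of samples;
    # warn if the projection crosses the configured backend quota.
    predict_linear(etcd_mvcc_db_total_size_in_bytes[4h], 4*60*60)
      > etcd_server_quota_backend_bytes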
@@ -42154,6 +42300,7 @@
- --host=kube-prometheus-stack-operator,kube-prometheus-stack-operator.default.svc
- --namespace=default
- --secret-name=kube-prometheus-stack-admission
+ securityContext: {}
resources: {}
restartPolicy: OnFailure
serviceAccountName: kube-prometheus-stack-admission
@@ -42200,6 +42347,7 @@
- --namespace=default
- --secret-name=kube-prometheus-stack-admission
- --patch-failure-policy=Fail
+ securityContext: {}
resources: {}
restartPolicy: OnFailure
serviceAccountName: kube-prometheus-stack-admission
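The securityContext: {} added to both admission jobs appears to be an empty placeholder rendered by the chart, taking effect only once the corresponding values are populated. Purely as an illustration of what could go there (hypothetical hardening values, not chart defaults):

    securityContext:
      runAsNonRoot: true               # illustrative only
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]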
MegaLinter status: ❌ ERROR
See error details in the MegaLinter reports artifact on the CI job page
chii-bot bot changed the title fix(helm): update chart kube-prometheus-stack to 39.4.1 → feat(helm): update chart kube-prometheus-stack to 39.5.0 on Aug 7, 2022
chii-bot bot force-pushed the renovate/kube-prometheus-stack-39.x branch from f79e3e7 to 61a7bfb on August 7, 2022 08:20
chii-bot bot changed the title feat(helm): update chart kube-prometheus-stack to 39.5.0 → feat(helm): update chart kube-prometheus-stack to 39.6.0 on Aug 10, 2022
chii-bot bot force-pushed the renovate/kube-prometheus-stack-39.x branch 2 times, most recently from 7414adb to 41b9779 on August 16, 2022 08:19
chii-bot bot changed the title feat(helm): update chart kube-prometheus-stack to 39.6.0 → feat(helm): update chart kube-prometheus-stack to 39.7.0 on Aug 16, 2022
chii-bot bot force-pushed the renovate/kube-prometheus-stack-39.x branch from 41b9779 to 23f014e on August 17, 2022 12:38
chii-bot bot changed the title feat(helm): update chart kube-prometheus-stack to 39.7.0 → feat(helm): update chart kube-prometheus-stack to 39.8.0 on Aug 17, 2022
chii-bot bot force-pushed the renovate/kube-prometheus-stack-39.x branch from 23f014e to f5605f4 on August 21, 2022 07:17
chii-bot bot changed the title feat(helm): update chart kube-prometheus-stack to 39.8.0 → feat(helm): update chart kube-prometheus-stack to 39.9.0 on Aug 21, 2022
chii-bot bot force-pushed the renovate/kube-prometheus-stack-39.x branch from f5605f4 to 419f9f0 on August 31, 2022 06:26
chii-bot bot changed the title feat(helm): update chart kube-prometheus-stack to 39.9.0 → feat(helm): update chart kube-prometheus-stack to 39.10.0 on Aug 31, 2022
chii-bot bot force-pushed the renovate/kube-prometheus-stack-39.x branch from 419f9f0 to 4b1cfd9 on August 31, 2022 17:23
chii-bot bot changed the title feat(helm): update chart kube-prometheus-stack to 39.10.0 → feat(helm): update chart kube-prometheus-stack to 39.11.0 on Aug 31, 2022
chii-bot bot force-pushed the renovate/kube-prometheus-stack-39.x branch from 4b1cfd9 to 3dd0291 on September 10, 2022 14:20
chii-bot bot changed the title feat(helm): update chart kube-prometheus-stack to 39.11.0 → feat(helm): update chart kube-prometheus-stack to 39.12.0 on Sep 10, 2022
chii-bot bot force-pushed the renovate/kube-prometheus-stack-39.x branch from 3dd0291 to 837cdfd on September 11, 2022 14:17
chii-bot bot changed the title feat(helm): update chart kube-prometheus-stack to 39.12.0 → feat(helm): update chart kube-prometheus-stack to 39.12.1 on Sep 11, 2022
chii-bot bot force-pushed the renovate/kube-prometheus-stack-39.x branch from 837cdfd to 78af3cf on September 13, 2022 10:24
chii-bot bot changed the title feat(helm): update chart kube-prometheus-stack to 39.12.1 → feat(helm): update chart kube-prometheus-stack to 39.13.0 on Sep 13, 2022
chii-bot bot force-pushed the renovate/kube-prometheus-stack-39.x branch from 78af3cf to d2b4943 on September 13, 2022 14:26
chii-bot bot changed the title feat(helm): update chart kube-prometheus-stack to 39.13.0 → feat(helm): update chart kube-prometheus-stack to 39.13.1 on Sep 13, 2022
| datasource | package               | from   | to      |
| ---------- | --------------------- | ------ | ------- |
| helm       | kube-prometheus-stack | 39.4.0 | 39.13.3 |
chii-bot bot force-pushed the renovate/kube-prometheus-stack-39.x branch from d2b4943 to 53c27ab on September 13, 2022 16:27
chii-bot bot changed the title feat(helm): update chart kube-prometheus-stack to 39.13.1 → feat(helm): update chart kube-prometheus-stack to 39.13.3 on Sep 13, 2022
This PR contains the following updates: kube-prometheus-stack 39.4.0 -> 39.13.3
⚠ Dependency Lookup Warnings ⚠
Warnings were logged while processing this repo. Please check the Dependency Dashboard for more information.
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about these updates again.
This PR has been generated by Renovate Bot.