
Telegraf helm charts PodDisruptionBudgets don't make sense #623

Open
cvalaas opened this issue Jan 11, 2024 · 1 comment
cvalaas commented Jan 11, 2024

By default, the telegraf deployments have 1 replica and a PDB with minAvailable: 1, which makes the pods unevictable and makes scaling down Kubernetes nodes impossible.
I would suggest changing pdb: create: to false as the default in helm-charts/charts/telegraf/values.yaml.

See here:

If you set maxUnavailable to 0% or 0, or you set minAvailable to 100% or the number of replicas, you are requiring zero voluntary evictions. When you set zero voluntary evictions for a workload object such as ReplicaSet, then you cannot successfully drain a Node running one of those Pods. If you try to drain a Node where an unevictable Pod is running, the drain never completes. This is permitted as per the semantics of PodDisruptionBudget.

https://kubernetes.io/docs/tasks/run-application/configure-pdb/
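
For what it's worth, the PDB can already be disabled per release without waiting for a default change, since the chart exposes that pdb.create flag. A rough sketch (the release name "my-telegraf" and the "influxdata" repo alias are placeholders, not taken from the chart docs):

```shell
# Illustrative only: turn off the chart's PodDisruptionBudget for an existing
# release so the single-replica pod stays evictable during node drains.
helm upgrade my-telegraf influxdata/telegraf --set pdb.create=false
```

The same effect comes from setting pdb.create: false in a values file passed with -f.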


broomfn commented Feb 28, 2024

+1 This has just caused me issues upgrading my Kubernetes node pool version; it's preventing upgrades. Obviously this is a serious security issue if the nodes cannot be upgraded.

As a workaround I've had to manually delete the PDB, which also isn't great for uptime reliability.
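
In case it helps anyone else, that manual workaround boils down to something like the following (the PDB name depends on the release's fullname, so these names are placeholders):

```shell
# Sketch of the manual workaround: find the PDB the chart created, then delete it.
kubectl get pdb -A
# e.g., assuming the release/fullname is "my-telegraf":
kubectl delete pdb my-telegraf -n <namespace>
```

Note that a later helm upgrade will re-create the PDB unless pdb.create is also set to false.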
