[Question] Why is node-termination sometimes not able to delete all the pods? #38
Hi, I tried to test it and checked the output logs, but no luck at all. Thanks for any ideas. |
I did some testing, and it looks like it does the job, but only if there are fewer than 11 pods on the node. In that case it removes all of them; otherwise it gets stuck, processes just a few of the pods, and stops suddenly with no further logs. The remaining pods keep running until the node hardware shuts down, so it takes a long time for Kubernetes to notice them and reschedule. |
Hi, I'm facing the same issue. I see from the Google docs that a preempted node gets 30 seconds before it is deleted. |
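A back-of-the-envelope calculation suggests why a serial eviction loop cannot finish inside that 30-second window. All numbers here are illustrative assumptions, not measured values:

```python
# Rough budget: how many serial evictions fit in the preemption window?
# Both constants are assumptions for illustration, not measured figures.
PREEMPTION_WINDOW_S = 30.0   # GCP gives a preempted VM roughly 30 s notice
PER_POD_EVICTION_S = 1.5     # assumed average time to issue and confirm one eviction

def max_evictable(window_s: float, per_pod_s: float) -> int:
    """How many evictions, done one after another, fit in the shutdown window."""
    return int(window_s // per_pod_s)

print(max_evictable(PREEMPTION_WINDOW_S, PER_POD_EVICTION_S))
```

With these assumed numbers only about 20 pods fit in the window, which lines up suspiciously well with the "deleted around 20 pods, then stopped" behavior reported below.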
I followed the GCP article and applied the recommendations, including the DaemonSet that creates a systemd service blocking shutdown of the kubelet process. I also delegated the deletion of all pods to an external service running in another pod in another namespace, so the eviction is always executed from outside the node being deleted/preempted. But still no success. |
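For reference, the external-drain approach described above can be sketched with plain `kubectl` (this is an ops sketch, not the handler's own code; the node name is a made-up example, and the timeout should stay below the preemption window):

```shell
# Sketch: drain a preempting node from a service running on a different node.
# NODE is a hypothetical example name; adjust flags to your workloads.
NODE="gke-preemptible-pool-abc123"

# Mark the node unschedulable so no new pods land on it.
kubectl cordon "$NODE"

# Evict everything else, capped well under the ~30 s preemption notice.
kubectl drain "$NODE" \
  --ignore-daemonsets \
  --delete-emptydir-data \
  --force \
  --grace-period=10 \
  --timeout=25s
```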
I am watching these events from Kubernetes when node-termination tries to delete the pods.
Do you know what this means? |
Hi,
I have preemptible nodes with more than 40 pods each.
For some reason the handler is not able to delete all the pods. It starts, and after it has deleted around 20 pods, it stops; there are no further logs from that moment on.
I also tried deleting the pods at the same time that the listing of pods in
eviction.go:66
is taking place, but with no success either.
Thanks for your help
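One way to avoid the "stops after ~20 pods" pattern is to bound the whole drain with a deadline and evict concurrently, so one slow eviction cannot stall everything behind it. A minimal stdlib-Python sketch of that idea — `evict` is a hypothetical stand-in for the real eviction API call, not this project's code:

```python
import concurrent.futures

def evict(pod: str) -> str:
    # Hypothetical stand-in for a real Eviction API call.
    return f"evicted {pod}"

def evict_all(pods, deadline_s=25.0, workers=8):
    """Evict pods in parallel; report whatever misses the overall deadline."""
    evicted, failed = [], []
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(evict, p): p for p in pods}
        done, not_done = concurrent.futures.wait(futures, timeout=deadline_s)
        for f in done:
            try:
                f.result()  # raises if the eviction call failed
                evicted.append(futures[f])
            except Exception:
                failed.append(futures[f])
        failed.extend(futures[f] for f in not_done)  # missed the deadline
    return evicted, failed

ok, missed = evict_all([f"pod-{i}" for i in range(40)])
print(len(ok), len(missed))
```

The key design point is that the deadline applies to the batch as a whole, not per pod, so the drain always terminates before the preemption window closes and can report which pods it could not evict.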