# docs: update do-not-disrupt description #6977
Karpenter will add the `Drifted` status condition on NodeClaims if the NodeClaim is drifted from its owning NodePool.
Automated forceful methods will begin draining nodes as soon as the condition is met. Note that these methods blow past NodePool Disruption Budgets, and do not wait for a pre-spin replacement node to be healthy for the pods to reschedule, unlike the graceful methods mentioned above. Use Pod Disruption Budgets and `do-not-disrupt` on your nodes to rate-limit the speed at which your applications are disrupted.
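As a minimal sketch of the first control, a PodDisruptionBudget can cap how many replicas of an application are evicted at once while a node drains. The `myapp` name and label below are illustrative, not from the original docs:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: myapp-pdb            # illustrative name
spec:
  maxUnavailable: 1          # allow at most one replica to be evicted at a time
  selector:
    matchLabels:
      app: myapp             # illustrative label selecting the protected pods
```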
### Expiration

Karpenter will disrupt nodes as soon as they expire, i.e. once they've lived for the duration of the NodePool's `spec.template.spec.expireAfter`. You can use expiration to periodically recycle nodes to address security concerns.
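As a sketch, expiration is set on the NodePool's template; the NodePool name and the 30-day value below are illustrative:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      expireAfter: 720h # recycle nodes after 30 days (30 * 24h)
```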
### Interruption

To enable interruption handling, configure the `--interruption-queue` CLI argument.
## Controls

### TerminationGracePeriod

You can set a NodePool's `terminationGracePeriod` through the [`spec.template.spec.terminationGracePeriod`]({{<ref "../concepts/nodepools/#spectemplatespecterminationgraceperiod" >}}) field.
This is used to define the maximum drain duration for a given Node.
A node begins draining once it has been deleted, and it will be forcibly terminated once the `terminationGracePeriod` has elapsed.
A node may be terminated before the `terminationGracePeriod` has elapsed if all disruptable pods have been drained.
In conjunction with `expireAfter`, `terminationGracePeriod` can be used to enforce an absolute maximum node lifetime.
The node will begin to drain once its `expireAfter` has elapsed, and it will be forcibly terminated once its `terminationGracePeriod` has elapsed, making the maximum node lifetime the sum of the two fields.
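A sketch combining the two fields; the NodePool name and values are illustrative, and with these values the maximum node lifetime is 23h + 1h = 24 hours:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      expireAfter: 23h            # node is deleted and begins draining after 23 hours
      terminationGracePeriod: 1h  # node is forcibly terminated at most 1 hour later
```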
{{% alert title="Warning" color="warning" %}}
To ensure that the `terminationGracePeriodSeconds` value for draining pods is respected, pods will be preemptively deleted before the Node's `terminationGracePeriod` has elapsed.
This includes pods with blocking [pod disruption budgets](https://kubernetes.io/docs/tasks/run-application/configure-pdb/) or a [`do-not-disrupt` annotation]({{<ref "#pod-level-controls" >}}).

Consider the following example: a Node with a 1 hour `terminationGracePeriod` has been disrupted and begins to drain.
A pod with a `do-not-disrupt` annotation and a 300 second (5 minute) `terminationGracePeriodSeconds` is scheduled to it.
If the pod is still running 55 minutes after the Node begins to drain, the pod will be deleted to ensure its `terminationGracePeriodSeconds` value is respected.

If a pod's `terminationGracePeriodSeconds` value exceeds that of the Node it is scheduled to, Karpenter will prioritize the Node's `terminationGracePeriod`.
The pod will be deleted as soon as the Node begins to drain, and it will not receive its full `terminationGracePeriodSeconds`.
{{% /alert %}}
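The worked example above can be put in manifest form. This is a sketch; the pod name, image, and command are illustrative, while the annotation and `terminationGracePeriodSeconds: 300` come from the example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker                       # illustrative name
  annotations:
    karpenter.sh/do-not-disrupt: "true"    # blocks graceful eviction during draining
spec:
  terminationGracePeriodSeconds: 300       # 5 minutes to shut down cleanly
  containers:
  - name: worker
    image: busybox                         # illustrative image
    command: ["sh", "-c", "sleep 3600"]    # stand-in for a long-running job
```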
### NodePool Disruption Budgets

You can rate limit Karpenter's disruption through the NodePool's `spec.disruption.budgets`. If undefined, Karpenter will default to one budget with `nodes: 10%`. Budgets will consider nodes that are actively being deleted for any reason, and will only block Karpenter from disrupting nodes voluntarily through drift, emptiness, and consolidation. Note that NodePool Disruption Budgets do not prevent Karpenter from cleaning up expired or drifted nodes.
#### Reasons

Karpenter allows specifying if a budget applies to any of `Drifted`, `Underutilized`, or `Empty`. When a budget has no reasons, it's assumed that it applies to all reasons. When calculating allowed disruptions for a given reason, Karpenter will take the minimum of the budgets that have listed the reason or have left reasons undefined.
If the budget is configured with a percentage value, such as `20%`, Karpenter will calculate the number of allowed disruptions as that percentage of the total nodes owned by the NodePool, rounding up.
For example, the following NodePool with three budgets defines the following requirements:
- The first budget will only allow 20% of nodes owned by that NodePool to be disrupted if they're empty or drifted. For instance, if there were 19 nodes owned by the NodePool, 4 empty or drifted nodes could be disrupted, rounding up from `19 * .2 = 3.8`.
- The second budget acts as a ceiling on the previous budget, only allowing 5 disruptions when there are more than 25 nodes.
- The last budget only blocks disruptions during the first 10 minutes of the day, where 0 disruptions are allowed, applying only to underutilized nodes.
```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      expireAfter: 720h # 30 * 24h = 720h
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    budgets:
    - nodes: "20%"
      reasons:
      - "Empty"
      - "Drifted"
    - nodes: "5"
    - nodes: "0"
      schedule: "@daily"
      duration: 10m
      reasons:
      - "Underutilized"
```
Duration and Schedule must be defined together. When omitted, the budget is always active.
### Pod-Level Controls

You can block Karpenter from voluntarily disrupting pods by adding the `karpenter.sh/do-not-disrupt: "true"` annotation on the pod.
You can treat this annotation as a single-pod, permanently blocking PDB.
This has the following consequences:
- Nodes with `do-not-disrupt` pods will be excluded from **voluntary** disruption, i.e. [Consolidation]({{<ref "#consolidation" >}}) and [Drift]({{<ref "#drift" >}}).
- Like pods with a fully-blocking PDB, pods with the `do-not-disrupt` annotation will **not** be gracefully evicted by the [Termination Controller]({{<ref "#termination-controller">}}).
  Karpenter will not be able to complete termination of the node until one of the following conditions is met:
  - All pods with `do-not-disrupt` annotations have been removed
  - All pods with `do-not-disrupt` annotations have entered a [terminal phase](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) (`Succeeded` or `Failed`)
  - The owning NodeClaim's [TerminationGracePeriod]({{<ref "#termination-grace-period" >}}) has elapsed
This is useful for pods that you want to run from start to finish without disruption.
Examples of pods that you might want to opt out of disruption include an interactive game that you don't want to interrupt or a long batch job (such as you might have with machine learning) that would need to start over if it were interrupted.
```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    karpenter.sh/do-not-disrupt: "true"
```

{{% alert title="Note" color="primary" %}}
This annotation will be ignored for [terminating pods](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) and [terminal pods](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase) (Failed/Succeeded).
{{% /alert %}}
{{% alert title="Note" color="primary" %}}
The `do-not-disrupt` annotation does **not** exclude nodes from involuntary disruption methods, i.e. [Expiration]({{<ref "#expiration" >}}), [Interruption]({{<ref "#interruption" >}}), and manual deletion (e.g. `kubectl delete node ...`).
{{% /alert %}}
### Node-Level Controls

You can block Karpenter from voluntarily choosing to disrupt certain nodes by setting the `karpenter.sh/do-not-disrupt: "true"` annotation on the node.
This will prevent voluntary disruption actions against the node.
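You can also apply the annotation to an existing node with `kubectl annotate`; the node name below is illustrative:

```shell
# Opt an existing node out of voluntary disruption (node name is illustrative)
kubectl annotate node ip-192-168-1-1 karpenter.sh/do-not-disrupt=true

# Remove the annotation again (the trailing "-" deletes an annotation)
kubectl annotate node ip-192-168-1-1 karpenter.sh/do-not-disrupt-
```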
```yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    karpenter.sh/do-not-disrupt: "true"
```
Below is the full changelog for v1, copied from the v1 Migration Upgrade Procedure:
* API Rename: NodePool's ConsolidationPolicy `WhenUnderutilized` is now renamed to `WhenEmptyOrUnderutilized`
* Behavior Changes:
  * Expiration is now forceful and begins draining as soon as it's expired. Karpenter does not wait for replacement capacity to be available before draining, but will start provisioning a replacement as soon as the node is expired and begins draining.
  * Pods with the `karpenter.sh/do-not-disrupt` annotation now block node termination. Termination of a node with these pods will be blocked until those pods are removed, enter a terminating or terminal state, or the NodeClaim's TerminationGracePeriod has expired.
  * Karpenter's generated NodeConfig now takes precedence when generating UserData with the AL2023 `amiFamily`. If you're setting any values managed by Karpenter in your AL2023 UserData, configure these through Karpenter natively (e.g. kubelet configuration fields).
  * Karpenter now adds a `karpenter.sh/unregistered:NoExecute` taint to nodes in injected UserData when using alias in AMISelectorTerms or non-Custom AMIFamily. When using `amiFamily: Custom`, users will need to add this taint into their UserData, where Karpenter will automatically remove it when provisioning nodes.
  * Discovered standard AL2023 AMIs will no longer be considered compatible with GPU / accelerator workloads. If you're using an AL2023 EC2NodeClass (without AMISelectorTerms) for these workloads, you will need to select your AMI via AMISelectorTerms (non-alias).
  * Karpenter now waits for underlying instances to be completely terminated before removing the associated nodes. This means it may take longer for nodes to be deleted and for nodeclaims to get cleaned up.
  * NodePools now have [status conditions]({{< relref "../concepts/nodepools/#statusconditions" >}}) that indicate if they are ready. If not, then they will not be considered during scheduling.
  * NodeClasses now have [status conditions]({{< relref "../concepts/nodeclasses/#statusconditions" >}}) that indicate if they are ready. If they are not ready, NodePools that reference them through their `nodeClassRef` will not be considered during scheduling.
* API Moves:
  * ExpireAfter has moved from the `NodePool.Spec.Disruption` block to `NodePool.Spec.Template.Spec`, and is now a drift-able field.
  * `Kubelet` was moved to the EC2NodeClass from the NodePool.