KEP-5142: Pop pod from backoffQ when activeQ is empty #5144
Conversation
macsko commented on Feb 6, 2025
- One-line PR description: Add KEP-5142
- Issue link: Pop pod from backoffQ when activeQ is empty #5142
- Other comments:
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: macsko.
Needs approval from an approver in each of these files.
/cc @dom4ha @sanposhiho @Huang-Wei @alculquicondor I haven't finished all the points yet, but the general idea is there.
Though there are some sections with TODOs, it's looking good overall, aligned with the discussion we had in the issue.
@@ -0,0 +1,3 @@
kep-number: 5142
alpha:
  approver: "@wojtek-t"
/cc @wojtek-t
Looking mostly good to me; I left only nits.
approvers:
  -
stage: alpha
Can we start it from beta per the discussion in the issue? I.e., enable it from day 1, but have a feature gate as a safeguard. WDYT?
I think we could do this.
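For context on what that safeguard could look like in code: below is a minimal sketch of a feature-gate check in the scheduler, assuming a purely illustrative gate name (`PopFromBackoffQWhenActiveQEmpty`); the real gate name and its registration are not decided in this thread.

```go
package scheduler

import (
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/component-base/featuregate"
)

// Illustrative gate name only; the actual gate would be named in the KEP and
// registered in-tree (e.g. in pkg/features) as Beta and enabled by default.
const PopFromBackoffQWhenActiveQEmpty featuregate.Feature = "PopFromBackoffQWhenActiveQEmpty"

// popFromBackoffQEnabled reports whether the backoffQ fallback may run.
// Shipping directly at beta and enabled by default, the gate only acts as a
// safeguard so operators can switch the behavior off via --feature-gates.
func popFromBackoffQEnabled() bool {
	// The gate must be registered with the default feature gate before
	// Enabled is called; otherwise this panics on an unknown key.
	return utilfeature.DefaultFeatureGate.Enabled(PopFromBackoffQWhenActiveQEmpty)
}
```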
@sanposhiho: GitHub didn't allow me to request PR reviews from the following users: macsko. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs.
I meant @dom4ha
However, in the real world, if the scheduling latency is short enough, there won't be a visible downgrade in throughput.
This will only happen if there are no pods in activeQ, so this can be mitigated by an appropriate rate of pod creation.

#### Backoff won't work as a natural rate limiter in case of errors
I think we have to distinguish unschedulable from error cases in the first version; otherwise we risk overloading the system due to the scheduler sending a storm of retries.
Right, especially if we decide to start from beta directly, it will be implemented in the first version.
and the backoff time is calculated based on the number of scheduling failures that the pod has experienced.
If one pod has a smaller attempt counter than others,
could the scheduler keep popping this pod ahead of other pods because the pod's backoff expires faster than others?
Actually, that wouldn't happen because the scheduler would increment the attempt counter of pods from backoffQ as well,
I assumed we could keep the logic of incrementing scheduling attempts only after the backoff time expires, to avoid a rapid increase of the backoff time for pods triggered by a large number of events.
It's probably debatable, because the side effect is an increased chance of pod starvation as described here (the pod would get back with its former backoff timeout, unless it expired).
The approach proposed here would make the backoff more natural, but would cause reporting much higher (by an order of magnitude) values for scheduling attempts.
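For reference on the numbers being discussed: the per-pod backoff grows with the attempt counter, so how often attempts are counted directly determines how quickly a pod reaches the cap. A minimal, self-contained sketch of that relationship, assuming the common exponential scheme with illustrative 1s initial and 10s maximum values (not the exact kube-scheduler code):

```go
package main

import (
	"fmt"
	"time"
)

// backoffDuration doubles the backoff for every scheduling attempt a pod has
// made, capped at maxBackoff. Illustrative only; the real values and logic
// live in kube-scheduler's internal scheduling queue.
func backoffDuration(attempts int, initialBackoff, maxBackoff time.Duration) time.Duration {
	d := initialBackoff
	for i := 1; i < attempts; i++ {
		d *= 2
		if d >= maxBackoff {
			return maxBackoff
		}
	}
	return d
}

func main() {
	// With 1s initial and 10s max backoff this prints: 1s, 2s, 4s, 8s, 10s, 10s.
	for attempts := 1; attempts <= 6; attempts++ {
		fmt.Printf("attempt %d -> backoff %v\n",
			attempts, backoffDuration(attempts, time.Second, 10*time.Second))
	}
}
```

Counting an attempt on every event-triggered retry moves a pod along this curve much faster than counting only once per expired backoff window, which is the trade-off being debated here.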
To achieve the goal, activeQ's `pop()` method needs to be changed:
1. If activeQ is empty, then instead of waiting for a pod to arrive in activeQ, popping from backoffQ is tried.
2. If backoffQ is empty, then `pop()` waits for a pod as previously.
3. If backoffQ is not empty, then the pod is processed as if it had been taken from activeQ, including increasing the attempts number.
increasing the attempts number.
It's debatable. It can cause a rapid increase in the backoff time for some pods (when many events trigger a retry), so I was thinking about computing the backoff based on the original approach, to not change the logic too much. In other words, not increasing scheduling attempts if all of them happened within the last backoff time window. This way, this mechanism would be just a best-effort improvement rather than a new approach.
It can cause a rapid increase in the backoff time for some pods (when many events trigger a retry)
I believe we should increment the attempt number.
It is exactly the intended behavior to move the pods to backoffQ immediately and make a lot of scheduling retries (regardless of whether this feature is enabled or not), if many events are coming and each triggers the pod's requeueing.
As long as QHints determine that an event might make the pod schedulable and then move this pod to backoffQ, this pod should be able to get retried at any time, regardless of the backoff time.
In case these tons of retries don't make the pod schedulable after all, and make the pod's backoff time much longer, that's the fault of QHint accuracy, not of this feature.
The only concern is the backoff time for pods getting an error (not an unschedulable status), because their scheduling failure isn't solved by events but by some other factor (e.g., a networking issue).
In this case, the following could happen: the pod gets an error status -> it is enqueued directly to backoffQ -> the scheduler pops this pod from backoffQ immediately (this feature) -> ... -> the attempt counter is incremented every time and the backoff time gets too long.
But, as the KEP mentions, we'll address this point along with kubernetes/kubernetes#128748.
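To make the `pop()` change discussed in this thread concrete, here is a minimal sketch of the proposed flow, using hypothetical, simplified types and plain FIFO slices instead of the scheduler's real heaps; attempt counting follows step 3 of the excerpt above.

```go
package main

import (
	"fmt"
	"sync"
)

// PodInfo is a stand-in for the scheduler's queued pod record.
type PodInfo struct {
	Name     string
	Attempts int // incremented whenever the pod is handed to a scheduling cycle
}

// Queue is a simplified model of the scheduling queue: an activeQ guarded by a
// condition variable, plus a backoffQ that can be drained when activeQ is empty.
type Queue struct {
	mu       sync.Mutex
	cond     *sync.Cond
	activeQ  []*PodInfo
	backoffQ []*PodInfo
}

func NewQueue() *Queue {
	q := &Queue{}
	q.cond = sync.NewCond(&q.mu)
	return q
}

// Pop follows the proposed flow:
//  1. If activeQ is empty, try popping from backoffQ instead of blocking.
//  2. If backoffQ is also empty, wait for a pod as before.
//  3. A pod taken from backoffQ is treated like one taken from activeQ,
//     including incrementing its attempt counter.
func (q *Queue) Pop() *PodInfo {
	q.mu.Lock()
	defer q.mu.Unlock()
	for len(q.activeQ) == 0 {
		if len(q.backoffQ) > 0 {
			p := q.backoffQ[0]
			q.backoffQ = q.backoffQ[1:]
			p.Attempts++ // counted as a regular scheduling attempt
			return p
		}
		q.cond.Wait() // both queues empty: block until a pod arrives
	}
	p := q.activeQ[0]
	q.activeQ = q.activeQ[1:]
	p.Attempts++
	return p
}

// Add places a pod on activeQ and wakes a waiting Pop.
func (q *Queue) Add(p *PodInfo) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.activeQ = append(q.activeQ, p)
	q.cond.Signal()
}

// AddToBackoff places a pod on backoffQ, as happens after a failed attempt.
func (q *Queue) AddToBackoff(p *PodInfo) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.backoffQ = append(q.backoffQ, p)
	q.cond.Signal()
}

func main() {
	q := NewQueue()
	q.AddToBackoff(&PodInfo{Name: "backed-off-pod"})
	// activeQ is empty, so Pop falls back to backoffQ instead of blocking.
	p := q.Pop()
	fmt.Printf("popped %s, attempts now %d\n", p.Name, p.Attempts)
}
```

In a real implementation the backoffQ side would pick the pod closest to finishing its backoff (and, per the discussion above, skip pods that failed with an error rather than an unschedulable status); the FIFO slices here only keep the sketch short.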
@macsko: The following tests failed.