KEP-5142: Pop pod from backoffQ when activeQ is empty #5144

Open
wants to merge 7 commits into master from pop-Pod_from_backoffq_when-activeq_is_empty
Conversation

@macsko (Member) commented Feb 6, 2025

  • One-line PR description: Add KEP-5142
  • Other comments:

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Feb 6, 2025
@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: macsko
Once this PR has been reviewed and has the lgtm label, please assign huang-wei, jpbetz for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Feb 6, 2025
@macsko macsko force-pushed the pop-Pod_from_backoffq_when-activeq_is_empty branch from 7fa0d54 to 34fc985 Compare February 6, 2025 16:32
@macsko (Member Author) commented Feb 6, 2025:

/cc @dom4ha @sanposhiho @Huang-Wei @alculquicondor

I haven't finished all the points yet, but the general idea is there.

@macsko macsko force-pushed the pop-Pod_from_backoffq_when-activeq_is_empty branch from 34fc985 to 58bc648 Compare February 6, 2025 16:38
@sanposhiho (Member) left a comment:

Though there are some sections with TODO, it looks good overall and is aligned with the discussion we had in the issue.

@@ -0,0 +1,3 @@
kep-number: 5142
alpha:
approver: "@wojtek-t"
Member:

/cc @wojtek-t

@k8s-ci-robot k8s-ci-robot requested a review from wojtek-t February 7, 2025 00:07
@sanposhiho (Member) left a comment:

Looking mostly good to me; left only nits.

/cc @alculquicondor @macsko @Huang-Wei

approvers:
-

stage: alpha
Member:

Can we start it from beta, per the discussion in the issue? I.e., enable it from day 1, but keep a feature gate as a safeguard. WDYT?

Member Author:

I think we could do this.

@k8s-ci-robot (Contributor) commented:

@sanposhiho: GitHub didn't allow me to request PR reviews from the following users: macsko.

Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

Looking mostly good to me; left only nits.

/cc @alculquicondor @macsko @Huang-Wei

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@sanposhiho (Member):

I meant @dom4ha

However, in the real world, if the scheduling latency is short enough, there won't be a visible degradation in throughput.
This will only happen if there are no pods in activeQ, so it can be mitigated by an appropriate rate of pod creation.

#### Backoff won't work as a natural rate limiter in case of errors
Member:

I think we have to distinguish unschedulable pods from error cases in the first version; otherwise we risk overloading the system with a storm of scheduler retries.

Member Author:

Right. Especially if we decide to start from beta directly, this will be implemented in the first version.

and the backoff time is calculated based on the number of scheduling failures that the pod has experienced.
If one pod has a smaller attempt counter than others,
could the scheduler keep popping this pod ahead of other pods because the pod's backoff expires faster than others?
Actually, that wouldn't happen because the scheduler would increment the attempt counter of pods from backoffQ as well,
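
For context, here is a minimal sketch of how an exponential per-pod backoff could be derived from the attempt counter. The helper name and the 1s initial / 10s maximum values are illustrative assumptions, not taken from the scheduler code:

```go
package main

import (
	"fmt"
	"time"
)

// backoffDuration grows the backoff exponentially with the number of
// scheduling attempts, capped at max. Parameter values are illustrative.
func backoffDuration(attempts int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 1; i < attempts; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for attempts := 1; attempts <= 6; attempts++ {
		fmt.Printf("attempt %d -> backoff %v\n", attempts, backoffDuration(attempts, time.Second, 10*time.Second))
	}
}
```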
Member:

I assumed we could keep the logic of incrementing scheduling attempts only after the backoff time expires, to avoid a rapid increase of the backoff time for pods triggered by a large number of events.

It's probably debatable, because the side effect is an increased chance of pod starvation, as described here (the pod would come back with its former backoff timeout, unless it had expired).

The approach proposed here would make the backoff more natural, but would cause the reported scheduling-attempt values to be higher by an order of magnitude.

To achieve the goal, activeQ's `pop()` method needs to be changed:
1. If activeQ is empty, then instead of waiting for a pod to arrive at activeQ, popping from backoffQ is tried.
2. If backoffQ is also empty, then `pop()` waits for a pod, as before.
3. If backoffQ is not empty, then the pod is processed as if it had been taken from activeQ, including incrementing the attempts number.
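
A minimal sketch of the `pop()` flow described in the three steps above, assuming hypothetical `queue`, `activeQ`/`backoffQ`, and `podInfo` types rather than the real scheduler structures:

```go
package main

import "sync"

// podInfo and heap are hypothetical stand-ins for the scheduler's queued-pod
// and sub-queue types.
type podInfo struct{ attempts int }

type heap struct{ items []*podInfo }

func (h *heap) len() int { return len(h.items) }

func (h *heap) popMin() *podInfo {
	p := h.items[0]
	h.items = h.items[1:]
	return p
}

type queue struct {
	lock     sync.Mutex
	cond     *sync.Cond
	activeQ  heap
	backoffQ heap
}

// pop follows the proposed behavior: when activeQ is empty, try backoffQ
// before blocking; a pod popped from backoffQ is handled like a regular
// scheduling attempt, including incrementing its attempt counter.
func (q *queue) pop() *podInfo {
	q.lock.Lock()
	defer q.lock.Unlock()
	for q.activeQ.len() == 0 {
		if q.backoffQ.len() > 0 {
			p := q.backoffQ.popMin() // steps 1 and 3: take the pod closest to finishing backoff
			p.attempts++             // counted as a normal scheduling attempt
			return p
		}
		q.cond.Wait() // step 2: both queues empty, wait for a pod as before
	}
	p := q.activeQ.popMin()
	p.attempts++
	return p
}

func main() {
	q := &queue{}
	q.cond = sync.NewCond(&q.lock)
	_ = q // pop() would be called by the scheduling loop
}
```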
Member:

incrementing the attempts number.

It's debatable. It can cause a rapid increase in the backoff time for some pods (when many events trigger retries), so I was thinking about computing the backoff based on the original approach, to not change the logic too much. In other words, not incrementing the scheduling attempts if all of them happened within the last backoff time window. This way, the mechanism would be just a best-effort improvement rather than a new approach.
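
A small sketch of this alternative, assuming hypothetical field names and the same illustrative backoff helper as above: the attempt counter is bumped only when the previous backoff window has already expired.

```go
package main

import "time"

type podInfo struct {
	attempts        int
	lastAttemptTime time.Time
}

// backoffDuration: same illustrative exponential helper as in the sketch above.
func backoffDuration(attempts int, initial, max time.Duration) time.Duration {
	d := initial
	for i := 1; i < attempts; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

// recordAttempt bumps the attempt counter only when the pod's previous
// backoff window has already expired, so a burst of event-triggered retries
// inside one window does not inflate the backoff time.
func recordAttempt(p *podInfo, now time.Time) {
	window := backoffDuration(p.attempts, time.Second, 10*time.Second)
	if now.Sub(p.lastAttemptTime) >= window {
		p.attempts++
	}
	p.lastAttemptTime = now
}

func main() {
	p := &podInfo{}
	recordAttempt(p, time.Now()) // first attempt: counter becomes 1
}
```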

@sanposhiho (Member) commented Feb 11, 2025:

It can cause a rapid increase in the backoff time for some pods (when many events trigger retries)

I believe we should increment the attempt number.
It is exactly the intentional behavior to move pods to backoffQ immediately and make many scheduling retries (regardless of whether this feature is on or off) if many events are coming and each triggers the pod's requeueing.
As long as QHints determine that an event might make the pod schedulable and then move the pod to backoffQ, this pod should be able to get retried at any time, regardless of backoff time.
If these many retries don't make the pod schedulable after all, and make the pod's backoff time much longer, that's the fault of QHint accuracy, not of this feature.

@sanposhiho (Member) commented Feb 11, 2025:

The only concern is the backoff time for pods getting an error (not an unschedulable status), because their scheduling failure isn't solved by events but by some other factor (e.g., a networking issue).
In this case, it could happen that: a pod gets an error status -> is enqueued directly to backoffQ -> the scheduler pops this pod from backoffQ immediately (this feature) -> ... -> the attempt counter is incremented every time and the backoff time gets too long.
But, as the KEP mentions, we'll address this point along with kubernetes/kubernetes#128748.

@k8s-ci-robot (Contributor) commented:

@macsko: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| pull-enhancements-test | 259cf07 | link | true | /test pull-enhancements-test |
| pull-enhancements-verify | 259cf07 | link | true | /test pull-enhancements-verify |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

Labels
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
Projects
Status: Needs Triage

4 participants