
Remove scheduler waits to speed up recovery time #8200

Merged

Conversation

pierDipi
Member

@pierDipi pierDipi commented Sep 23, 2024

Currently, the scheduler and autoscaler are single-threaded and use a lock to prevent multiple scheduling and autoscaling decisions from happening in parallel; this is not a problem for our use cases. However, the multiple waits currently present are slowing down recovery time.

From my testing, if I delete and recreate the Kafka control plane and data plane (roughly simulating an upgrade), without this patch it takes hours to recover with 400 triggers, or 20 minutes with 100 triggers; with the patch recovery is nearly immediate (only 2-3 minutes with 400 triggers).
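To make the pattern concrete, here is a minimal Go sketch of a blocking poll inside a scheduling path versus a single lister read that leaves retries to the controller's work queue. The function names, polling helper, intervals, and lister wiring below are hypothetical and only illustrate the general shape, not the actual scheduler code.

```go
package scheduler

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	corev1listers "k8s.io/client-go/listers/core/v1"
)

// Before (hypothetical): a blocking wait inside the scheduling path. With
// hundreds of triggers, each poll serializes behind the scheduler lock and
// stretches recovery time.
func waitForPodRunning(ctx context.Context, pods corev1listers.PodNamespaceLister, name string) error {
	return wait.PollUntilContextTimeout(ctx, 250*time.Millisecond, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			p, err := pods.Get(name)
			if err != nil {
				return false, nil // not there yet, keep polling
			}
			return p.Status.Phase == corev1.PodRunning, nil
		})
}

// After (hypothetical): read the current state once and return an error if the
// pod is not ready; the work queue re-enqueues the key instead of blocking.
func podRunning(pods corev1listers.PodNamespaceLister, name string) error {
	p, err := pods.Get(name)
	if err != nil {
		return fmt.Errorf("failed to get pod %s: %w", name, err)
	}
	if p.Status.Phase != corev1.PodRunning {
		return fmt.Errorf("pod %s is not running yet", name)
	}
	return nil
}
```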

  • Remove waits from state builder and autoscaler
  • Add additional debug logs
  • Use the logger provided through the context, as opposed to global loggers in each individual component, to preserve knative/pkg resource-aware log keys (see the sketch below).
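A minimal sketch of the logging change in the last bullet, assuming knative.dev/pkg's logging.FromContext helper; the function names and log fields are made up for illustration.

```go
package scheduler

import (
	"context"

	"go.uber.org/zap"
	"knative.dev/pkg/logging"
)

// Before (illustrative): a global logger drops the resource-aware log keys
// (such as the key of the object being reconciled).
var logger = zap.NewNop().Sugar()

func autoscaleWithGlobalLogger() {
	logger.Debugw("starting autoscaling", "replicas", 3)
}

// After (illustrative): logging.FromContext returns the sugared zap logger
// stored in the context by the knative/pkg controller framework, so debug
// lines keep the resource-aware fields.
func autoscaleWithContextLogger(ctx context.Context) {
	logging.FromContext(ctx).Debugw("starting autoscaling", "replicas", 3)
}
```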

Before, with 200 triggers:
[screenshot]

Before, with 1024 triggers, we see a very high work queue depth for 3 hours (knative controller queue size):
[screenshot]

After, with various numbers of triggers:
[screenshot]

Work queue depth is not high for long periods:
[screenshot]

@knative-prow knative-prow bot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Sep 23, 2024

knative-prow bot commented Sep 23, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: pierDipi

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@knative-prow knative-prow bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 23, 2024

codecov bot commented Sep 23, 2024

Codecov Report

Attention: Patch coverage is 66.40625% with 43 lines in your changes missing coverage. Please review.

Project coverage is 66.56%. Comparing base (e79f3b6) to head (03fd6ae).
Report is 4 commits behind head on main.

Files with missing lines                    Patch %   Lines
pkg/scheduler/statefulset/autoscaler.go     48.83%    19 Missing and 3 partials ⚠️
pkg/scheduler/state/state.go                56.25%    9 Missing and 5 partials ⚠️
pkg/scheduler/statefulset/scheduler.go      89.58%    5 Missing ⚠️
pkg/scheduler/state/helpers.go              33.33%    1 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #8200      +/-   ##
==========================================
- Coverage   67.47%   66.56%   -0.91%     
==========================================
  Files         371      371              
  Lines       18036    18271     +235     
==========================================
- Hits        12169    12162       -7     
- Misses       5088     5324     +236     
- Partials      779      785       +6     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@pierDipi pierDipi force-pushed the scheduler-remove-wait-better-logging branch 4 times, most recently from 0a65cf8 to a64478f on September 23, 2024 at 11:25
Currently, the scheduler and autoscaler are single-threaded and use
a lock to prevent multiple scheduling and autoscaling decisions
from happening in parallel; this is not a problem for our use
cases. However, the multiple `wait`s currently present are slowing
down recovery time.

From my testing, if I delete and recreate the Kafka control plane
and data plane, without this patch it takes 1h to recover when there
are 400 triggers, or 20 minutes when there are 100 triggers; with the
patch it is nearly immediate (only 2-3 minutes with 400 triggers).

- Remove `wait`s from state builder and autoscaler
- Add additional debug logs
- Use the logger provided through the context, as opposed to global
  loggers in each individual component, to preserve `knative/pkg`
  resource-aware log keys.

Signed-off-by: Pierangelo Di Pilato <[email protected]>
@pierDipi pierDipi force-pushed the scheduler-remove-wait-better-logging branch from a64478f to 03fd6ae on September 23, 2024 at 11:26
@pierDipi
Member Author

/test unit-tests

Member

@matzew matzew left a comment


/lgtm

I like the extra added logging here as well

@knative-prow knative-prow bot added the lgtm Indicates that a PR is ready to be merged. label Sep 23, 2024
@pierDipi
Member Author

/test reconciler-tests

@pierDipi
Member Author

/cherry-pick release-1.15

@knative-prow-robot
Contributor

@pierDipi: once the present PR merges, I will cherry-pick it on top of release-1.15 in a new PR and assign it to you.

In response to this:

/cherry-pick release-1.15

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@pierDipi
Member Author

/cherry-pick release-1.14

@knative-prow-robot
Contributor

@pierDipi: once the present PR merges, I will cherry-pick it on top of release-1.14 in a new PR and assign it to you.

In response to this:

/cherry-pick release-1.14

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@knative-prow knative-prow bot merged commit 641cbb7 into knative:main Sep 23, 2024
33 of 36 checks passed
@knative-prow-robot
Contributor

@pierDipi: #8200 failed to apply on top of branch "release-1.14":

Applying: Remove scheduler `wait`s to speed up recovery time
Using index info to reconstruct a base tree...
M	pkg/scheduler/scheduler.go
M	pkg/scheduler/scheduler_test.go
M	pkg/scheduler/state/helpers.go
M	pkg/scheduler/state/state.go
M	pkg/scheduler/state/state_test.go
M	pkg/scheduler/statefulset/autoscaler.go
M	pkg/scheduler/statefulset/autoscaler_test.go
M	pkg/scheduler/statefulset/scheduler.go
M	pkg/scheduler/statefulset/scheduler_test.go
Falling back to patching base and 3-way merge...
Auto-merging pkg/scheduler/statefulset/scheduler_test.go
Auto-merging pkg/scheduler/statefulset/scheduler.go
CONFLICT (content): Merge conflict in pkg/scheduler/statefulset/scheduler.go
Auto-merging pkg/scheduler/statefulset/autoscaler_test.go
Auto-merging pkg/scheduler/statefulset/autoscaler.go
Auto-merging pkg/scheduler/state/state_test.go
Auto-merging pkg/scheduler/state/state.go
Auto-merging pkg/scheduler/state/helpers.go
Auto-merging pkg/scheduler/scheduler_test.go
Auto-merging pkg/scheduler/scheduler.go
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 Remove scheduler `wait`s to speed up recovery time
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

In response to this:

/cherry-pick release-1.14

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@knative-prow-robot
Contributor

@pierDipi: #8200 failed to apply on top of branch "release-1.15":

Applying: Remove scheduler `wait`s to speed up recovery time
Using index info to reconstruct a base tree...
M	pkg/scheduler/statefulset/scheduler.go
M	pkg/scheduler/statefulset/scheduler_test.go
Falling back to patching base and 3-way merge...
Auto-merging pkg/scheduler/statefulset/scheduler_test.go
Auto-merging pkg/scheduler/statefulset/scheduler.go
CONFLICT (content): Merge conflict in pkg/scheduler/statefulset/scheduler.go
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 Remove scheduler `wait`s to speed up recovery time
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".

In response to this:

/cherry-pick release-1.15

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

pierDipi added a commit to pierDipi/eventing that referenced this pull request Sep 23, 2024
Currently, the scheduler and autoscaler are single-threaded and use
a lock to prevent multiple scheduling and autoscaling decisions
from happening in parallel; this is not a problem for our use
cases. However, the multiple `wait`s currently present are slowing
down recovery time.

From my testing, if I delete and recreate the Kafka control plane
and data plane, without this patch it takes 1h to recover when there
are 400 triggers, or 20 minutes when there are 100 triggers; with the
patch it is nearly immediate (only 2-3 minutes with 400 triggers).

- Remove `wait`s from state builder and autoscaler
- Add additional debug logs
- Use the logger provided through the context, as opposed to global
  loggers in each individual component, to preserve `knative/pkg`
  resource-aware log keys.

Signed-off-by: Pierangelo Di Pilato <[email protected]>
pierDipi added a commit to pierDipi/eventing that referenced this pull request Sep 23, 2024
Currently, the scheduler and autoscaler are single-threaded and use
a lock to prevent multiple scheduling and autoscaling decisions
from happening in parallel; this is not a problem for our use
cases. However, the multiple `wait`s currently present are slowing
down recovery time.

From my testing, if I delete and recreate the Kafka control plane
and data plane, without this patch it takes 1h to recover when there
are 400 triggers, or 20 minutes when there are 100 triggers; with the
patch it is nearly immediate (only 2-3 minutes with 400 triggers).

- Remove `wait`s from state builder and autoscaler
- Add additional debug logs
- Use the logger provided through the context, as opposed to global
  loggers in each individual component, to preserve `knative/pkg`
  resource-aware log keys.

Signed-off-by: Pierangelo Di Pilato <[email protected]>
knative-prow bot pushed a commit that referenced this pull request Sep 23, 2024
…its to speed up recovery time (#8202)

* Improve scheduler memory usage (#8144)

* Improve scheduler memory usage

- Create a namespace-scoped statefulset lister instead of a
  cluster-wide one
- Accept a PodLister rather than creating a cluster-wide one

Signed-off-by: Pierangelo Di Pilato <[email protected]>

* Update codegen

Signed-off-by: Pierangelo Di Pilato <[email protected]>

---------

Signed-off-by: Pierangelo Di Pilato <[email protected]>

* Remove scheduler `wait`s to speed up recovery time (#8200)

Currently, the scheduler and autoscaler are single-threaded and use
a lock to prevent multiple scheduling and autoscaling decisions
from happening in parallel; this is not a problem for our use
cases. However, the multiple `wait`s currently present are slowing
down recovery time.

From my testing, if I delete and recreate the Kafka control plane
and data plane, without this patch it takes 1h to recover when there
are 400 triggers, or 20 minutes when there are 100 triggers; with the
patch it is nearly immediate (only 2-3 minutes with 400 triggers).

- Remove `wait`s from state builder and autoscaler
- Add additional debug logs
- Use the logger provided through the context, as opposed to global
  loggers in each individual component, to preserve `knative/pkg`
  resource-aware log keys.

Signed-off-by: Pierangelo Di Pilato <[email protected]>

---------

Signed-off-by: Pierangelo Di Pilato <[email protected]>
knative-prow bot pushed a commit that referenced this pull request Sep 23, 2024
…its to speed up recovery time (#8203)

* Improve scheduler memory usage (#8144)

* Improve scheduler memory usage

- Create a namespace-scoped statefulset lister instead of a
  cluster-wide one
- Accept a PodLister rather than creating a cluster-wide one

Signed-off-by: Pierangelo Di Pilato <[email protected]>

* Update codegen

Signed-off-by: Pierangelo Di Pilato <[email protected]>

---------

Signed-off-by: Pierangelo Di Pilato <[email protected]>

* Remove scheduler `wait`s to speed up recovery time (#8200)

Currently, the scheduler and autoscaler are single-threaded and use
a lock to prevent multiple scheduling and autoscaling decisions
from happening in parallel; this is not a problem for our use
cases. However, the multiple `wait`s currently present are slowing
down recovery time.

From my testing, if I delete and recreate the Kafka control plane
and data plane, without this patch it takes 1h to recover when there
are 400 triggers, or 20 minutes when there are 100 triggers; with the
patch it is nearly immediate (only 2-3 minutes with 400 triggers).

- Remove `wait`s from state builder and autoscaler
- Add additional debug logs
- Use the logger provided through the context, as opposed to global
  loggers in each individual component, to preserve `knative/pkg`
  resource-aware log keys.

Signed-off-by: Pierangelo Di Pilato <[email protected]>

---------

Signed-off-by: Pierangelo Di Pilato <[email protected]>
@pierDipi pierDipi deleted the scheduler-remove-wait-better-logging branch September 24, 2024 05:58
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. lgtm Indicates that a PR is ready to be merged. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.
3 participants