✨ feat: implement initial benchmark tests #1071
Conversation
✅ Deploy Preview for olmv1 ready!
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## main #1071 +/- ##
==========================================
- Coverage 75.28% 75.23% -0.06%
==========================================
Files 35 35
Lines 1914 1914
==========================================
- Hits 1441 1440 -1
- Misses 330 331 +1
Partials 143 143
Broadly, I would suggest that we focus benchmarks on areas of code that we think may be particularly prone to CPU time and allocation.
Do we have any indication that the contentmanager, tokengetter, or schemebuilder are particularly problematic? If not, I'm concerned that these benchmarks could become a source of noise and toil.
In particular, I think the four best places to add benchmarks are:
- The catalog cache and/or client.
- The Resolver interface implementation.
- The image unpacker.
- The registry+v1 to helm chart conversion.
In my observation, these seem to be the largest contributors to the duration of a Reconcile call, so performance regressions in these areas likely have a bigger impact than regressions in other areas of the codebase.
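As a rough illustration of this suggestion (not code from the PR): a Go benchmark for any of these areas would typically use b.ReportAllocs and b.ResetTimer so that allocations are captured alongside CPU time while fixture setup is excluded. The names convertRegistryV1ToChart and loadTestBundle below are hypothetical placeholders, not identifiers from this repository.

package convert_test

import "testing"

// Sketch only: convertRegistryV1ToChart and loadTestBundle are hypothetical
// stand-ins for the real conversion entry point and a test fixture loader.
func BenchmarkRegistryV1ToChart(b *testing.B) {
    bundle := loadTestBundle(b) // load the fixture once, outside the timed loop
    b.ReportAllocs()            // record allocations alongside CPU time
    b.ResetTimer()              // exclude fixture setup from the measurement
    for i := 0; i < b.N; i++ {
        if _, err := convertRegistryV1ToChart(bundle); err != nil {
            b.Fatal(err)
        }
    }
}

Run with go test -bench=. -benchmem to get ns/op, B/op, and allocs/op figures that CI can track across commits.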
Totally agree. I was intending to go through the code base and identify where we have more CPU/memory-intensive tasks. However, to start working on GH Actions, I wanted to have something in place.
.github/workflows/benchmark.yaml (outdated)
with:
  name: benchmark-results
  path: benchmark.txt
  overwrite: true
Still needs a step to compare the change in performance as a result of the PR. In addition, what do we think would be a good threshold for flagging a performance degradation?
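The usual tool for comparing go test -bench output across runs is golang.org/x/perf/cmd/benchstat. As a rough sketch of the simpler approach (parsing the output directly), the program below reads two result files and fails when any benchmark slows down by more than a chosen budget; the file names and the 20% threshold are assumptions borrowed from the later commit message, not values from this PR.

// Minimal sketch of a regression check over `go test -bench` output files.
// File names and the 20% threshold are assumptions, not values from this PR.
package main

import (
    "bufio"
    "fmt"
    "os"
    "strconv"
    "strings"
)

// readNsPerOp collects ns/op values keyed by benchmark name from one results file.
func readNsPerOp(path string) (map[string]float64, error) {
    f, err := os.Open(path)
    if err != nil {
        return nil, err
    }
    defer f.Close()

    out := map[string]float64{}
    sc := bufio.NewScanner(f)
    for sc.Scan() {
        // Expected shape: BenchmarkName-8  N  1234 ns/op  [456 B/op  7 allocs/op]
        fields := strings.Fields(sc.Text())
        if len(fields) < 4 || !strings.HasPrefix(fields[0], "Benchmark") || fields[3] != "ns/op" {
            continue
        }
        if v, err := strconv.ParseFloat(fields[2], 64); err == nil {
            out[fields[0]] = v
        }
    }
    return out, sc.Err()
}

func main() {
    const threshold = 0.20 // assumed 20% slowdown budget

    baseline, err := readNsPerOp("benchmark-main.txt") // results from the main branch
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    current, err := readNsPerOp("benchmark-pr.txt") // results from the PR branch
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }

    regressed := false
    for name, before := range baseline {
        after, ok := current[name]
        if !ok {
            continue // benchmark removed or renamed; ignore here
        }
        if after > before*(1+threshold) {
            fmt.Printf("%s regressed: %.0f ns/op -> %.0f ns/op\n", name, before, after)
            regressed = true
        }
    }
    if regressed {
        os.Exit(1) // fail the CI step
    }
}

A single benchmark run is noisy, so in practice benchstat over several go test -count runs per side gives a more trustworthy comparison than a one-shot diff like this.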
bundleSource := &source.BundleSource{
    Type: source.SourceTypeImage,
    Image: &source.ImageSource{
        Ref: "quay.io/eochieng/litmus-edge-operator-bundle@sha256:104b294fa1f4c2e45aa0eac32437a51de32dce0eff923eced44a0dddcb7f363f",
This bundle image will also need to be changed to something that we expect to stay around longer.
This reverts commit f8ae501.
Signed-off-by: Edmund Ochieng <[email protected]>
Parse the results of the GH Actions run and flag output that shows a 20% decrease in performance. Signed-off-by: Edmund Ochieng <[email protected]>
Suppress logs from the Unpack function, which were causing errors to be written to the benchmark results. Signed-off-by: Edmund Ochieng <[email protected]>
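For context on the commit above: when the code under benchmark logs through controller-runtime, one common way to keep that output out of the benchmark results is to install a discarding logger. A minimal sketch, assuming controller-runtime's log package is in use; unpack is a hypothetical stand-in for the code path being measured.

package bench_test

import (
    "testing"

    "github.com/go-logr/logr"
    ctrllog "sigs.k8s.io/controller-runtime/pkg/log"
)

// silenceLogs swaps in a no-op logger so library log output does not end up
// interleaved with the benchmark results.
func silenceLogs() {
    ctrllog.SetLogger(logr.Discard())
}

func BenchmarkUnpack(b *testing.B) {
    silenceLogs()
    b.ReportAllocs()
    for i := 0; i < b.N; i++ {
        unpack() // hypothetical call into the Unpack path being measured
    }
}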
During a meeting on the benchmarking of operator-controller functions held on 08/13/2024, the following action items were identified:
PR needs rebase.
Will be revisited at a later time.
Description
This PR implements benchmarking tests that will be used in CI to evaluate changes in performance and compute resource utilization as new features are implemented.
Reviewer Checklist
Relates to #920