chore(manifests): disable --auto-gomemlimit for Prometheus on SNO until we can ensure it won't result in excessive CPU usage #2549
base: master
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: machine424. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment; merging can be blocked with /hold.
@machine424: The following tests failed; say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests.
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Since there is no ticket linked to this, I was wondering if we saw any instances of the ~10% memory reduction bottlenecking the CPU on SNO?
@@ -1491,7 +1491,7 @@ func (f *Factory) PrometheusK8s(grpcTLS *v1.Secret, telemetrySecret *v1.Secret)
	return p, nil
}

-func (f *Factory) setupGoGC(p *monv1.Prometheus) {
+func (f *Factory) adjustGoGCConfig(p *monv1.Prometheus) {
Maybe something like:

-func (f *Factory) adjustGoGCConfig(p *monv1.Prometheus) {
+func (f *Factory) adjustGoSettings(p *monv1.Prometheus) {

Since this affects the GOMEMLIMIT too now.
for _, env := range c.Env {
	require.NotEqual(t, env.Name, "GOGC")
}
return
}

-require.Contains(t, c.Env, v1.EnvVar{Name: "GOGC", Value: tc.exp})
+require.Contains(t, c.Env, v1.EnvVar{Name: "GOGC", Value: tc.expectedGOGC})
+100!
-require.Contains(t, c.Env, v1.EnvVar{Name: "GOGC", Value: tc.exp})
+require.Contains(t, c.Env, v1.EnvVar{Name: "GOGC", Value: tc.expectedGOGC})

+require.Equal(t, tc.autoGOMEMLIMITDisabled, argumentPresent(*c, "--no-auto-gomemlimit"))
We could drop the tc.autoGOMEMLIMITDisabled field, as it can be safely derived from ir.HighlyAvailableInfrastructure(): that's the only case where this is disabled for now (else enabled)?

-require.Equal(t, tc.autoGOMEMLIMITDisabled, argumentPresent(*c, "--no-auto-gomemlimit"))
+require.Equal(t, tc.ir.HighlyAvailableInfrastructure(), argumentPresent(*c, "--no-auto-gomemlimit"))
I'm asking in #2549 (review) as any observed insight should help me set a more meaningful buffer threshold in kubernetes-monitoring/kubernetes-mixin#1010 (comment).
Thanks for the review. This is actually still WIP; it requires openshift/prometheus#227, and I've marked it as such. I'll get back to you later.
chore(manifests): disable --auto-gomemlimit for Prometheus on SNO until we can ensure it won't result in excessive CPU usage

requires openshift/prometheus#227