handle the missing mlmd service-ca cert gracefully #728
base: main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
Branch force-pushed from 3c2a742 to c0694c4 (compare).
When MLMD is reconciled, it expects a secret containing certs that is created by the service-ca (when podToPodTLS is enabled). The secret may not exist yet while the service-ca catches up; this is an expected state, so we should not report stack traces for it. This change catches the scenario via a custom error for such lagging dependencies and logs it at info level without a stack trace. Signed-off-by: Humair Khan <[email protected]>
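As a minimal sketch of what such a custom error for a lagging dependency could look like (DependencyMissingError and checkCertSecret are hypothetical names for illustration, not necessarily the identifiers used in this PR):

```go
// Sketch only: DependencyMissingError and checkCertSecret are hypothetical
// names illustrating the "custom error for lagging dependencies" idea,
// not the exact code in this PR.
package controllers

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrs "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// DependencyMissingError marks an expected "not created yet" condition, e.g.
// the service-ca has not issued the cert secret that MLMD needs.
type DependencyMissingError struct {
	Details string
}

func (e *DependencyMissingError) Error() string {
	return fmt.Sprintf("dependency not yet available: %s", e.Details)
}

// checkCertSecret returns a DependencyMissingError when the secret is absent
// and passes any other error through unchanged.
func checkCertSecret(ctx context.Context, c client.Client, name, namespace string) error {
	secret := &corev1.Secret{}
	err := c.Get(ctx, types.NamespacedName{Name: name, Namespace: namespace}, secret)
	if apierrs.IsNotFound(err) {
		return &DependencyMissingError{Details: fmt.Sprintf("secret %q not found in namespace %q", name, namespace)}
	}
	return err
}
```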
Branch force-pushed from c0694c4 to 34fd75a (compare).
Change to PR detected. A new PR build was completed.
@@ -313,7 +314,14 @@ func (r *DSPAReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.
	err = r.ReconcileMLMD(ctx, dspa, params)
	if err != nil {
		r.setStatusAsNotReady(config.MLMDProxyReady, err, dspaStatus.SetMLMDProxyStatus)
		return ctrl.Result{}, err
		// TODO: this (and other components) should handle these scenarios via states or statuses instead of error
Maybe we should link the existing issue here.
https://issues.redhat.com/browse/RHOAIENG-13321
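As a rough illustration of the state-based approach the TODO above (and the PR description below) hints at, components could eventually report a coarse state instead of returning errors for expected conditions; the type and constant names here are hypothetical:

```go
// Hypothetical sketch, not part of this PR: a coarse per-component state that
// would let the reconciler report "waiting on a dependency" without an error.
package controllers

// ComponentState describes the lifecycle phase of a DSPA component.
type ComponentState string

const (
	StateReady               ComponentState = "Ready"
	StateLoading             ComponentState = "Loading"
	StateWaitingOnDependency ComponentState = "WaitingOnDependency"
	StateFailed              ComponentState = "Failed"
)
```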
/lgtm
PR needs rebase.
Is it possible to have a test for this?
This will handle the following dspo log message more gracefully:
This is an expected state and should not print a stack trace, but we still want stack traces for other errors, so I've introduced a custom error for this scenario. In the future we should handle this via component states, for example "state: loading" or "state: waitingOnDependency", instead of errors, but that should be approached consistently for all components; this PR is a simple fix until then.
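On the caller side, a sketch of how the reconcile loop could distinguish this custom error (this assumes the hypothetical DependencyMissingError above; handleReconcileError and the 30-second requeue interval are likewise assumptions, not the PR's actual wiring):

```go
// Sketch only: relies on the hypothetical DependencyMissingError shown earlier;
// handleReconcileError and the requeue interval are illustrative assumptions.
package controllers

import (
	"errors"
	"time"

	"github.com/go-logr/logr"
	ctrl "sigs.k8s.io/controller-runtime"
)

// handleReconcileError logs an expected "dependency missing" error at info
// level and requeues, while returning every other error unchanged so it still
// surfaces with a stack trace.
func handleReconcileError(log logr.Logger, err error) (ctrl.Result, error) {
	var depErr *DependencyMissingError
	if errors.As(err, &depErr) {
		log.Info("component dependency not ready yet, requeueing", "reason", depErr.Details)
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	}
	return ctrl.Result{}, err
}
```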
Result: