Add leader election to the backend #847
base: main
Conversation
operationsScanner.Join()
go func() {
	sig := <-signalChannel
Please just use signal.NotifyContext on this thing? Didn't we already talk about this?
I want to log the signal received along with a timestamp so I can observe how much longer it takes the process to shut down. I don't see any big advantage to signal.NotifyContext; it's a convenience wrapper that does the same thing I'm doing.
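For context, the manual handler under discussion looks roughly like this (a sketch built around the diff above; logger, cancel, and the exact log message are assumptions, not code from this PR):

// Assumes imports of os, os/signal, and syscall.
signalChannel := make(chan os.Signal, 1)
signal.Notify(signalChannel, syscall.SIGINT, syscall.SIGTERM)
go func() {
	sig := <-signalChannel
	// The structured logger timestamps the entry; record which signal arrived.
	logger.Info("Caught signal, shutting down", "signal", sig.String())
	cancel() // assumed shutdown hook; not from the diff
}()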
Is signalChannel just for your logger? Then the following is simpler:
ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
defer cancel()
go func() {
	<-ctx.Done()
	logger.Info("Caught interrupt signal")
}()
And the only difference is that you can't tell whether you got SIGINT or SIGTERM (why do you care? In production, the kubelet only ever sends SIGTERM or SIGKILL (ref), so this seems like a distinction without much meaning). The code is much less Rube Goldberg. IDK. I respect that you think logging the specific signal is important, but in a decade of writing k8s-native applications it has never been useful or required.
As far as a big advantage goes, I did want to mention that using a more complex cascade of handlers and goroutines when it isn't necessary is a downside for sure. Making the simple choice every time you can helps keep this maintainable ten years down the line.
Allows the backend deployment to be scaled up but still have only one instance polling at a time.
Because we can, now that aro-hcp-backend uses leader election.
What this PR does
Adds leader election to the backend deployment using the k8s.io/client-go/tools/leaderelection package so it can be horizontally scaled while still restricting active polling to one pod at a time.
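For reference, wiring with k8s.io/client-go/tools/leaderelection typically takes the shape sketched below; the lease name, namespace, timings, and the clientset/podName/logger/operationsScanner.Run identifiers are illustrative assumptions, not code from this PR:

// Sketch only. Assumes imports of context, time,
// metav1 "k8s.io/apimachinery/pkg/apis/meta/v1",
// "k8s.io/client-go/tools/leaderelection", and
// "k8s.io/client-go/tools/leaderelection/resourcelock".
lock := &resourcelock.LeaseLock{
	LeaseMeta: metav1.ObjectMeta{
		Name:      "aro-hcp-backend-leader", // hypothetical lease name
		Namespace: namespace,                // hypothetical variable
	},
	Client: clientset.CoordinationV1(),
	LockConfig: resourcelock.ResourceLockConfig{
		Identity: podName, // typically the pod name
	},
}

leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
	Lock:            lock,
	LeaseDuration:   15 * time.Second,
	RenewDeadline:   10 * time.Second,
	RetryPeriod:     2 * time.Second,
	ReleaseOnCancel: true,
	Callbacks: leaderelection.LeaderCallbacks{
		OnStartedLeading: func(ctx context.Context) {
			// Only the elected leader runs the polling loop.
			operationsScanner.Run(ctx) // illustrative entry point
		},
		OnStoppedLeading: func() {
			logger.Info("Leader lease lost, stopping polling")
		},
		OnNewLeader: func(identity string) {
			logger.Info("Leader elected", "identity", identity)
		},
	},
})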
Jira: No Jira, added by request
Link to demo recording:
Special notes for your reviewer
The leader election package emits unstructured log messages through klog, so I had to intercept them and adapt them to the standard library's structured logger, which required some futzing. Please enlighten me if there's a better way to do this.
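One possible alternative (a sketch, not what this PR does): klog can be pointed at a logr.Logger, and recent logr versions can wrap a slog handler directly, which may avoid a hand-rolled adapter. This assumes k8s.io/klog/v2 plus a logr version that provides FromSlogHandler:

// Route klog output (used by client-go's leader election) into slog.
// Assumes imports of log/slog, os, "github.com/go-logr/logr", and "k8s.io/klog/v2".
logger := slog.New(slog.NewJSONHandler(os.Stderr, nil))
klog.SetLogger(logr.FromSlogHandler(logger.Handler()))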
I went ahead and bumped the backend replica count to 2 just to prove it works in dev environments.
We should add a /healthz endpoint to the backend at some point. The leader election package offers healthz integration for when it fails to renew the leader lease, but that's a pull request for another day.
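For whoever picks that up, the hook is roughly the following sketch (not part of this PR; the timeout and mux wiring are illustrative):

// Sketch: expose leader-lease health on /healthz.
// Assumes imports of net/http, time, and "k8s.io/client-go/tools/leaderelection".
watchdog := leaderelection.NewLeaderHealthzAdaptor(20 * time.Second)
// Hand the adaptor to the elector via LeaderElectionConfig{WatchDog: watchdog, ...};
// it then reports an error whenever the lease has not been renewed in time.

mux := http.NewServeMux()
mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
	if err := watchdog.Check(r); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusOK)
})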