feat: continue using and drain serving endpointslices during termination #4946
base: main
Conversation
LGTM thanks!
@fogninid why should traffic be routed to a pod during graceful termination? IMO, with the right rolling update strategy in place, traffic should be shifted to the newer pod.
@arkodg my expectation is for envoy-gateway to behave in the same way as other networking solutions. The most explicit mentions I found of this logic are close to what cilium is doing here. envoy-gateway could do something similar, using all endpoints whose state is `serving`. Would this be the right place to introduce this logic?
thanks for sharing the link @fogninid, found this blog https://kubernetes.io/blog/2022/12/30/advancements-in-kubernetes-traffic-engineering/#traffic-loss-from-load-balancers-during-rolling-updates which highlights how it can be used by Ingress Controllers
This would translate to: if an endpoint is not Ready but is Serving, we can set the lb endpoint health status to `DRAINING`. wdyt @envoyproxy/gateway-maintainers / @envoyproxy/gateway-reviewers?
The description in that link of the load balancer behavior is too simplified for my taste. There is no 1:1 mapping from a single field of the EndpointConditions to the expected lb state of the matching endpoint; rather, the full list of endpoints and the combination of Serving/Terminating should map to a set of lb endpoints with appropriate states. Again, my main reasoning is that new connections should not be dropped if any Serving=true endpoint is currently available (irrespective of its termination); this is the same behavior described for kube-proxy in that link.
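A minimal Go sketch of the per-endpoint mapping being discussed here, under the assumption that ready endpoints stay healthy and terminating-but-still-serving endpoints are kept but marked as draining. This is illustrative only, not the PR's actual code; the package and function names are hypothetical:

```go
package endpoints // hypothetical package, for illustration only

import (
	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	discoveryv1 "k8s.io/api/discovery/v1"
)

// isTrue guards against nil EndpointConditions fields, which are pointers.
func isTrue(b *bool) bool { return b != nil && *b }

// healthFor maps a single EndpointSlice endpoint to an Envoy health status:
// Ready endpoints are HEALTHY, terminating-but-serving endpoints are DRAINING
// (still usable, but deprioritized), everything else is excluded as UNHEALTHY.
func healthFor(ep discoveryv1.Endpoint) corev3.HealthStatus {
	switch {
	case isTrue(ep.Conditions.Ready):
		return corev3.HealthStatus_HEALTHY
	case isTrue(ep.Conditions.Serving) && isTrue(ep.Conditions.Terminating):
		return corev3.HealthStatus_DRAINING
	default:
		return corev3.HealthStatus_UNHEALTHY
	}
}
```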
Based on the envoy config link you provided, I believe the
thanks for creating the detailed table, it looks good except for this row: `[{Address1, Serving=false, Terminating=true}, {Address2, Serving=true, Terminating=false}]`
to implement this
thanks for the hints, I will give it a shot implementing it that way.
thanks @fogninid, prefer a
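As a worked illustration of the table row above, here is a list-level sketch of the selection policy described earlier: prefer ready endpoints, and only fall back to terminating-but-serving ones when no ready endpoint exists, so Address1 (Serving=false) never receives traffic while the ready Address2 is available. This is an assumed policy for illustration, not the PR's actual implementation:

```go
package endpoints // hypothetical package, for illustration only

import discoveryv1 "k8s.io/api/discovery/v1"

// truthy guards against nil EndpointConditions fields, which are pointers.
func truthy(b *bool) bool { return b != nil && *b }

// activeEndpoints selects the endpoints that should receive new connections:
// ready endpoints win; if none exist, terminating-but-still-serving endpoints
// are used so new connections are not dropped while anything can serve them.
func activeEndpoints(eps []discoveryv1.Endpoint) []discoveryv1.Endpoint {
	var ready, servingOnly []discoveryv1.Endpoint
	for _, ep := range eps {
		switch {
		case truthy(ep.Conditions.Ready):
			ready = append(ready, ep)
		case truthy(ep.Conditions.Serving):
			servingOnly = append(servingOnly, ep)
		}
	}
	if len(ready) > 0 {
		// Table-row case: {Address1, Serving=false, Terminating=true} is dropped
		// entirely because the ready Address2 can take the traffic.
		return ready
	}
	return servingOnly
}
```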
Codecov Report
Attention: Patch coverage is

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #4946      +/-   ##
==========================================
- Coverage   66.73%   66.66%    -0.07%
==========================================
  Files         210      210
  Lines       32609    32615        +6
==========================================
- Hits        21761    21743       -18
- Misses       9537     9554       +17
- Partials     1311     1318        +7

☔ View full report in Codecov by Sentry.
@arkodg can you have a look at the latest changes? I think they now match your suggestions. I am hacking together an e2e test to convince myself that the implementation is correct when running together with envoyproxy.
thanks @fogninid! the PR looks good, added some comments to improve the maintainability of the code
Signed-off-by: Daniele Fognini <[email protected]>
LGTM, thanks!
Continue routing to endpoints while their graceful termination is in progress
What this PR does / why we need it:
Use the `serving` condition that was defined in kubernetes/kubernetes#92968.

Release Notes: Yes
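To see the `serving` and `terminating` conditions this change relies on, a small client-go snippet can list the EndpointSlices of a backend Service during a rolling update. The `default` namespace and `backend` Service name below are placeholders, and this is only an observational aid, not part of the PR:

```go
package main

import (
	"context"
	"fmt"

	discoveryv1 "k8s.io/api/discovery/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// EndpointSlices are labeled with the name of the Service that owns them.
	slices, err := cs.DiscoveryV1().EndpointSlices("default").List(context.Background(),
		metav1.ListOptions{LabelSelector: discoveryv1.LabelServiceName + "=backend"})
	if err != nil {
		panic(err)
	}

	deref := func(b *bool) bool { return b != nil && *b }
	for _, s := range slices.Items {
		for _, ep := range s.Endpoints {
			fmt.Printf("%v ready=%t serving=%t terminating=%t\n",
				ep.Addresses, deref(ep.Conditions.Ready),
				deref(ep.Conditions.Serving), deref(ep.Conditions.Terminating))
		}
	}
}
```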