Use same ALB for same app deployed twice on 2 EKS clusters within the same AWS account. #4004
Comments
What I did to resolve this issue was to create the ALB myself via Terraform and add the tag the controller expects.
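A minimal Terraform sketch of this workaround, assuming the tag in question is the elbv2.k8s.aws/cluster tag discussed later in this thread (the resource name, subnet variable, and cluster name are all illustrative):

```hcl
# Create the ALB outside the controller, pre-tagged so the AWS Load Balancer
# Controller treats the ALB as belonging to this cluster.
resource "aws_lb" "shared" {
  name               = "shared-alb"          # illustrative name
  load_balancer_type = "application"
  subnets            = var.public_subnet_ids # assumed variable

  tags = {
    # Must match the cluster name the controller runs with (--cluster-name).
    "elbv2.k8s.aws/cluster" = "blue-cluster"
  }
}
```

Note the limitation raised in this thread still applies: the tag can only carry one cluster name, so only one controller will consider itself the owner.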
We are facing the same challenges. In our case, we want to use two clusters to enable safer Kubernetes version updates. One workaround is to use an Ingress to create the load balancer in one cluster and, in the other, rely solely on TargetGroupBinding with the multi-cluster feature to add the new pods to the Target Groups created by the AWS Load Balancer Controller. The issue with this approach is that we lose the ability to modify listener rules in the second cluster, as there's no Ingress there. This becomes especially problematic when using Argo Rollouts, where adjusting ALB listener rules within the cluster is particularly useful for traffic routing in the rollout.
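As a sketch, the second-cluster side of this workaround might look like the following TargetGroupBinding (the names and ARN are placeholders; the multiClusterTargetGroup field assumes a controller version that supports the multi-cluster feature):

```yaml
# Green cluster: no Ingress here. Pods are registered into the target group
# that the blue cluster's Ingress created, via a TargetGroupBinding.
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-app            # illustrative
  namespace: default
spec:
  serviceRef:
    name: my-app          # Service in this (green) cluster
    port: 80
  # ARN of the target group created by the other cluster's controller.
  targetGroupARN: arn:aws:elasticloadbalancing:eu-west-1:111111111111:targetgroup/my-app/0123456789abcdef
  # Multi-cluster mode: the controller only deregisters targets it registered
  # itself, so targets managed from the other cluster are left alone.
  multiClusterTargetGroup: true
```

As the comment notes, this registers pods but gives the green cluster no way to manage listener rules, since those belong to the blue cluster's Ingress.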
This seems dangerous and potentially not supported. How are you making sure that the configuration set by one controller is not overridden by the other?
I agree that using MultiCluster is the way to solve this problem. Unfortunately, a mechanism for modifying the listener rules from both clusters is not readily available. We have no way of ensuring that both controllers are on the same page about what the "real" state of the listener should be.
Even with the workaround provided by @guilhermefxs, there is a big issue. Imagine you first create a blue cluster, then deploy the very same services in the green cluster. You do not create an Ingress for your green service; instead you rely on the MultiCluster feature and a TargetGroupBinding. The problem is: if you delete the blue cluster, the target group created for the service is deleted along with it, so the green service no longer has a target group to bind to. Some people create the LB and the target group outside the AWS LB controller, but I think the controller loses much of its value if we start creating everything outside of it. We should have a feature that makes it possible to share one ALB across clusters safely.
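For reference, creating the target group out-of-band, so that its lifetime is decoupled from either cluster, could look like this Terraform sketch (the name and VPC variable are assumptions):

```hcl
# Target group owned by Terraform rather than either cluster's controller,
# so deleting the blue cluster does not delete it. Each cluster then
# attaches its pods with a TargetGroupBinding referencing this ARN.
resource "aws_lb_target_group" "my_app" {
  name        = "my-app"       # illustrative
  port        = 80
  protocol    = "HTTP"
  target_type = "ip"           # pod IPs are registered directly
  vpc_id      = var.vpc_id     # assumed variable
}
```

This avoids the deletion problem described above, at the cost the commenter points out: more and more of the setup lives outside the controller.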
Is your feature request related to a problem?
I have created 2 EKS clusters within the same AWS account. I would like to apply blue/green deployments to update one cluster after the other. My goal is to have only one ALB routing traffic to both clusters, with each cluster running an instance of the very same application. The AWS LB controller will create two target groups, and I would like to attach both target groups to the same ALB listener rule. The problem I have today is that when I create the second instance of my app in the green cluster, I get an error.
From what I have found, the AWS LB controller looks for the elbv2.k8s.aws/cluster tag on the ALB and expects its value to be the name of the cluster. The problem is that with two clusters, blue and green, the controller in each cluster expects a different value for that single tag on the one shared ALB.
Describe the solution you'd like
NA
Describe alternatives you've considered
NA
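Though no alternatives are listed above, for context: within a single cluster the controller already supports sharing one ALB across Ingresses via the IngressGroup annotation, which is roughly the single-cluster analogue of this request. A hedged sketch (host and names are illustrative); it does not help across clusters, precisely because of the elbv2.k8s.aws/cluster ownership tag described above:

```yaml
# Two Ingresses in the SAME cluster that set the same group.name share one
# ALB. Across clusters this fails: each controller stamps and checks its
# own elbv2.k8s.aws/cluster tag on the ALB.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    alb.ingress.kubernetes.io/group.name: shared-alb  # shared group -> shared ALB
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: app.example.com   # illustrative
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
```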