
Use same ALB for same app deployed twice on 2 EKS clusters within the same AWS account. #4004

Open
d3vpasha opened this issue Jan 7, 2025 · 4 comments

Comments

@d3vpasha

d3vpasha commented Jan 7, 2025

Is your feature request related to a problem?
I have created 2 EKS clusters within the same AWS account. I would like to apply blue/green deployments to update one cluster after another. My goal is to have only 1 ALB which will route traffic to both clusters. Each cluster will have the same instance of the very same application. AWS LB controller will create 2 target groups and I would like to have those target groups with the same LB listener rule. The problem that I have today is when I create the second instance of my app in green cluster, I get the error :

DuplicateLoadBalancerName: A load balancer with the same name 'eks-mycluster' exists, but with different settings"}

From what I have found, the AWS LB controller looks for the tag elbv2.k8s.aws/cluster on the ALB and expects its value to be the name of the cluster. The problem is that with 2 clusters, blue & green, the AWS LB controller in each cluster expects a different value for that tag on the single shared ALB.
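
To make the conflict concrete, this is roughly what each controller expects to find on the ALB (cluster names here are just examples):

```yaml
# Tag key/value each controller matches on the ALB (cluster names are illustrative):
blue-cluster-controller:
  elbv2.k8s.aws/cluster: eks-blue
green-cluster-controller:
  elbv2.k8s.aws/cluster: eks-green   # same tag key, different expected value, so one ALB cannot satisfy both
```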

Describe the solution you'd like
NA

Describe alternatives you've considered
NA

@d3vpasha
Author

What I did to resolve this issue was to create the ALB myself via Terraform and add the tag elbv2.k8s.aws/cluster: shared. Then, when I instantiate the AWS LB controller, I use the input --set clusterName=shared, so both AWS LB controllers look for the same ALB. I don't like this solution and hope there is a better way of doing this.
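
For reference, a rough sketch of that workaround; only clusterName is shown (region, VPC, IAM settings etc. are omitted), and the ALB itself is still created out-of-band, e.g. via Terraform, carrying the elbv2.k8s.aws/cluster: shared tag:

```yaml
# Helm values for the aws-load-balancer-controller chart, used on BOTH clusters
# (equivalent to passing --set clusterName=shared on the helm command line):
clusterName: shared   # must match the elbv2.k8s.aws/cluster tag on the pre-created ALB
```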

@guilhermefxs

guilhermefxs commented Jan 10, 2025

We are facing the same challenges. In our case, we want to use two clusters to enable safer Kubernetes version updates.

One workaround is to use an Ingress to create the load balancer in one cluster and, in the other, rely solely on TargetGroupBinding with the multi-cluster feature to add the new pods to the Target Groups created by the AWS Load Balancer Controller. The issue with this approach is that we lose the ability to modify listener rules in the second cluster, as there's no Ingress there. This becomes especially problematic when using Argo Rollouts, where adjusting ALB listener rules within the cluster is particularly useful for traffic routing in the rollout.
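
For anyone trying this, a rough sketch of what the TargetGroupBinding on the second cluster can look like; names and the ARN are placeholders, and the multiClusterTargetGroup field comes from the multi-cluster feature, so verify it against the controller version you run:

```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-app                 # placeholder
  namespace: default
spec:
  serviceRef:
    name: my-app               # Service in this (second) cluster whose endpoints get registered
    port: 80
  targetType: ip
  # ARN of the target group created by the Ingress in the first cluster (placeholder):
  targetGroupARN: arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/EXAMPLE/0123456789abcdef
  multiClusterTargetGroup: true  # multi-cluster flag so the two controllers do not deregister each other's targets
```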

@zac-nixon
Collaborator

What I did to resolve this issue was to create the ALB myself via Terraform and add the tag elbv2.k8s.aws/cluster: shared. Then, when I instantiate the AWS LB controller, I use the input --set clusterName=shared, so both AWS LB controllers look for the same ALB. I don't like this solution and hope there is a better way of doing this.

This seems dangerous and potentially not supported. How are you making sure that the configuration set by one controller is not overridden by the other?

One workaround is to use an Ingress to create the load balancer in one cluster and, in the other, rely solely on TargetGroupBinding with the multi-cluster feature to add the new pods to the Target Groups created by the AWS Load Balancer Controller. The issue with this approach is that we lose the ability to modify listener rules in the second cluster, as there's no Ingress there. This becomes especially problematic when using Argo Rollouts, where adjusting ALB listener rules within the cluster is particularly useful for traffic routing in the rollout.

I agree that using Multicluster is the way to solve this problem. Unfortunately, the mechanism of modifying the listener rules is not something readily available. We have no way of ensuring that both controllers are on the same page to know what the "real" state of the listener should be.

@d3vpasha
Author

Even with the workaround provided by @guilhermefxs, there is a big issue. Imagine you first create a blue cluster, then create the very same services within the green cluster. For the green services you create no Ingress and instead rely on the MultiCluster feature and TargetGroupBinding. The problem is: if you delete the blue cluster, the target group created for the service goes with it, so the green service no longer has a target group to be tied to. Some people create the LB & the target groups outside the AWS LB controller, but I think the controller loses much of its interest if we start creating everything outside of it. We should have a feature where:

  • 2 EKS clusters can share the same ALB
  • 2 services deployed on 2 EKS clusters can share the same LB rule but with different target groups (e.g. 2 different target groups that point to the same URL) (see the sketch below)
  • 2 services deployed on 2 EKS clusters can share the same LB rule & the same target group

The whole idea behind this is to have smooth blue/green deployments when switching from one EKS cluster to another.
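
For the second point, the closest thing I can see today is a custom forward action on the Ingress in the cluster that owns the ALB, splitting traffic between the local service's target group and the other cluster's target group referenced by ARN. Names, weights and the ARN below are placeholders, and it still means one cluster owns the Ingress, which is exactly the limitation discussed above:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # Weighted forward action: local Service plus the other cluster's target group by ARN (placeholder):
    alb.ingress.kubernetes.io/actions.blue-green: '{"type":"forward","forwardConfig":{"targetGroups":[{"serviceName":"my-app","servicePort":"80","weight":50},{"targetGroupARN":"arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/GREEN-TG/0123456789abcdef","weight":50}]}}'
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: blue-green        # must match the actions.<name> annotation suffix
                port:
                  name: use-annotation  # tells the controller to use the action defined in the annotation
```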
