
IPAM Controller not creating VIP custom resource when using annotation service.citrix.com/frontend-ip #674

Open
philipp1992 opened this issue Dec 12, 2024 · 4 comments

Comments

@philipp1992

Describe the bug
We create a Service that should use a static VIP via the service.citrix.com/frontend-ip annotation.
This works, and the LoadBalancer service is reachable.
Unfortunately, no VIP CR is created, so the IPAM controller did not know this IP was already in use and assigned it to another LoadBalancer service, creating a conflict.

When we first create the VIP CR manually and then omit the annotation on the service, it works as expected.

To Reproduce

  1. Steps
kind: Service
apiVersion: v1
metadata:
  name: test-lb
  namespace: xxxx
  annotations:
    service.citrix.com/frontend-ip: xxx
spec:
  externalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: acme-http
    - name: https
      protocol: TCP
      port: 443
      targetPort: https
  internalTrafficPolicy: Cluster

  allocateLoadBalancerNodePorts: true
  type: LoadBalancer
  ipFamilyPolicy: SingleStack
  sessionAffinity: None
  selector:
    app.kubernetes.io/instance: xxxx
  2. Version of the NetScaler Ingress Controller
    ingress-controller: 1.39.6
    ipam-controller: 1.0.3

  3. Version of MPX/VPX/CPX
    NS14.1 25.56.nc

Expected behavior
A VIP CR being created so that the IP cannot be reused for other services.

Logs

kubectl logs

The IPAM controller isn't logging any errors or information in this case.

Kind regards,
Philipp

@arijitr-citrix
Collaborator

Hi @philipp1992, this is expected behavior. While using the IPAM controller, you should not specify a static IP in the annotation:
service.citrix.com/frontend-ip: xxx
Instead, use the following annotation:
service.citrix.com/ipam-range: ''

It is general practice not to use IPs that conflict with the IP range given to the IPAM controller. You have to use this annotation to use the IPAM controller. Currently this is supported for Listener, Ingress, and Service.
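
For illustration, a minimal sketch of a LoadBalancer Service using that annotation might look like the one below (the range value, names, and selector are placeholders, assuming they correspond to a range already configured on the IPAM controller):

kind: Service
apiVersion: v1
metadata:
  name: test-lb
  namespace: xxxx
  annotations:
    # Placeholder value: assumed to reference (a subset of) the IP range
    # the IPAM controller was deployed with.
    service.citrix.com/ipam-range: "192.168.10.10-192.168.10.20"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/instance: xxxx
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: acme-http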

@philipp1992
Author

We primarily use this when we want to ensure the LoadBalancer service keeps the same IP after deleting and recreating it.
So the correct way would be to add service.citrix.com/ipam-range: 192.168.10.10? Even if .10 is part of the IPAM range, it won't be used for other services?

@arijitr-citrix
Collaborator

Hi @philipp1992, if you want the service to have the same IP even after it is deleted and comes back up, then you have to use the frontend-ip annotation as you were doing. That is not a use case for the IPAM controller.
Please find more details here: https://docs.netscaler.com/en-us/netscaler-k8s-ingress-controller/configure/annotations.html

Can you please explain your use case for the IPAM controller? Please note that the IPAM controller is used to allocate an IP to a Service, Ingress, or Listener from a range of IPs. For optimal utilization of resources, IPs are not held for any resource once it is deleted.
https://docs.netscaler.com/en-us/netscaler-k8s-ingress-controller/configure/ipam-for-ingress

@philipp1992
Author

We have the IPAM controller and a set range of IPs, e.g. 192.168.10.10 to 192.168.10.20.

A user creates a LoadBalancer service and it gets the IP 192.168.10.12 from IPAM as the VIP -> the user notes down that IP and deletes the service for whatever reason.
The user then wants to create a new service with the same IP.
We expected that we could use the frontend-ip annotation and the IPAM controller would see this IP as being in use. That is not the case.
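
A hedged sketch of the workaround suggested above: pin the VIP with frontend-ip and keep that address outside the range handed to the IPAM controller, so the two cannot conflict (the concrete addresses here are assumptions for illustration only):

kind: Service
apiVersion: v1
metadata:
  name: test-lb
  namespace: xxxx
  annotations:
    # Assumption: 192.168.10.30 lies OUTSIDE the 192.168.10.10-192.168.10.20
    # range managed by the IPAM controller, so this pinned VIP can never be
    # handed out to another LoadBalancer service.
    service.citrix.com/frontend-ip: "192.168.10.30"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/instance: xxxx
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: https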
