Session affinity not working correctly #624

Open
mattiamondini opened this issue Dec 6, 2023 · 7 comments

@mattiamondini

Setting up session affinity following the guide https://docs.netscaler.com/en-us/citrix-k8s-ingress-controller/how-to/session-affinity.html creates the correct configuration on the Netscaler, but traffic is still load-balanced round-robin on the Kubernetes side. This causes applications that require session affinity to misbehave.

To Reproduce
Steps:

1. Create an app that returns the pod name, like:

```python
from flask import Flask
import pprint
import os

class LoggingMiddleware(object):
    def __init__(self, app):
        self._app = app

    def __call__(self, env, resp):
        errorlog = env['wsgi.errors']
        pprint.pprint(('REQUEST', env), stream=errorlog)

        def log_response(status, headers, *args):
            pprint.pprint(('RESPONSE', status, headers), stream=errorlog)
            return resp(status, headers, *args)

        return self._app(env, log_response)

app = Flask(__name__)

@app.route('/')
def hello_world():
    return f"Hello from {os.environ['HOSTNAME']}"

if __name__ == '__main__':
    app.wsgi_app = LoggingMiddleware(app.wsgi_app)
    app.run(host='0.0.0.0', port=8080)
```

2. Set up the app with a Service and an Ingress like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: myapp-frontend
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-frontend
  annotations:
    ingress.citrix.com/preconfigured-certkey: '{"certs": [ {"name": "mycert", "type": "sni"} ] }'
    ingress.citrix.com/lbvserver: '{"myapp-frontend":{"persistenceType":"SOURCEIP", "timeout":"10"}}'
spec:
  tls:
    - secretName:
  rules:
    - host: "myapp-dev.k8s-test.it.cobra.group"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-frontend
                port:
                  number: 80
```

Ingress controller version is 1.31.3 with Netscaler VPX version NS13.0 91.13.nc.

Expected behavior
Traffic is balanced according to the stickiness policy defined in the ingress annotations.

@apoorvak-citrix
Contributor

@mattiamondini
I've noticed that you've exposed the service "myapp-frontend" as type: NodePort through the ADC.
By default, the ingress controller configures the node IPs as backend endpoints in the service groups.
Thus, enabling persistence on the ADC only ensures that requests from the same client IP are directed to the same Kubernetes node, not the same pod.
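
For context, Kubernetes itself can also pin traffic to a pod at the Service level via sessionAffinity: ClientIP (a hypothetical workaround, not something suggested in this thread). A minimal sketch, reusing the Service from the reproduction steps; note that with NodePort the source IP kube-proxy sees is often the ADC SNIP after SNAT, so this may pin all ADC-originated traffic to a single pod per node rather than per client:

```yaml
# Hypothetical sketch: Service-level stickiness enforced by kube-proxy.
# Caveat: behind an ADC doing SNAT, the "client IP" observed in-cluster
# is typically the SNIP, which can collapse traffic onto one pod.
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend
spec:
  type: NodePort
  sessionAffinity: ClientIP        # pin a given source IP to one pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 600          # assumed value; the default is 10800
  selector:
    app: myapp-frontend
  ports:
    - port: 80
      targetPort: 8080
```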

Are your Netscaler and Kubernetes cluster nodes in the same network?

@mattiamondini
Author

@apoorva-05
Yes, they are in the same network but in different subnets (not all ports are visible, only a specified range).

> Thus, enabling persistence on the ADC only ensures that requests from the same client IP are directed to the same Kubernetes node, not the same pod.

I understand this, but I've read that other ingress controllers propagate the affinity down to the Service-level balancing in some way when session affinity is configured.

Is there a way to achieve session affinity from the ingress to the pod with the Netscaler ingress controller?

@apoorvak-citrix
Contributor

apoorvak-citrix commented Dec 7, 2023

@mattiamondini

It's possible for services with type: ClusterIP. In this case, the ingress controller directly exposes the pod IPs on the Netscaler VPX/MPX, and your current annotation should be enough to maintain session affinity to the pod.
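
For illustration, a minimal sketch of the Service from the reproduction steps switched to ClusterIP (same names as above): with ClusterIP backends the controller binds pod IPs to the service group on the ADC, so the SOURCEIP persistence annotation applies per pod.

```yaml
# Sketch: the reproduction Service as ClusterIP, so the ingress
# controller exposes pod IPs (rather than node IPs) on the Netscaler.
apiVersion: v1
kind: Service
metadata:
  name: myapp-frontend
spec:
  type: ClusterIP
  selector:
    app: myapp-frontend
  ports:
    - port: 80
      targetPort: 8080
```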

However, we'll explore whether there's a way to achieve this for the NodePort service itself.

@mattiamondini
Author

@apoorva-05
We tried using type: ClusterIP at first, but probably because the Netscaler cannot reach the Kubernetes internal network we were not able to get a working ingress; we thought that a dual-tier topology was needed for this kind of Service.
Also, type: ClusterIP services are not described in the Deployment Topologies documentation.

@apoorvak-citrix
Contributor

@mattiamondini
If both the ADC and the Kubernetes cluster are on the same subnet, you can expose a service of type: ClusterIP via an ingress, similar to how you exposed the NodePort service.
Also, when deploying the ingress controller, enable feature-node-watch so that the controller can add static routes on the ADC to reach the backend pods. (Ref: Link )

1. If you are deploying the ingress controller directly via YAML manifests, set the following in the args section (see the sketch after this list):
   - --feature-node-watch true
2. If you are deploying via Helm charts, set the following in values.yaml:
   nodeWatch: true
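
For context, a sketch of where the flag sits in a manifest-based deployment; the container name and image are placeholders, not taken from this thread:

```yaml
# Sketch: fragment of the ingress controller Deployment showing
# feature-node-watch passed in the container args.
spec:
  containers:
    - name: cic                                    # illustrative name
      image: <netscaler-ingress-controller-image>  # placeholder image
      args:
        - --feature-node-watch true
```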

If you're interested, we'd be happy to schedule a call to go through your topology and recommend the deployment that best fits your use case.
CC: @ankits123 @dheerajng @subashd

@mattiamondini
Author

@apoorva-05
Our ADC is not on the same subnet, but we can open a port range from the ADC to the Kubernetes cluster subnet if needed.

Having a meeting would be great; any suggestion to improve the solution is appreciated.

@subashd
Collaborator

subashd commented Sep 24, 2024

Hi @mattiamondini,
Please connect with us via email [email protected].
