kube-proxy with ipvs and lc does not work as expected (perhaps a conflict with flannel rules) #10522
Comments
Just out of curiosity, does this still happen if you disable traefik or servicelb? I think I remember hearing something about the ipvs entries that kube-proxy adds for the traefik LoadBalancer service causing problems in some environments. |
I have been playing with this issue for the last 3 hours and finally got it working using:
Disabling traefik combined with setting non-default cidr did the trick. |
I suspect that you'll run into the same problem any time you use ServiceLB, since kube-proxy will add IPVS VIPs for the remote nodes' IPs to each node (those are the loadbalancer IPs advertised by ServiceLB), and that probably breaks communication with those remote nodes. |
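As a hypothetical way to check this theory (these commands are not from the issue and assume `ipvsadm` is installed), one could compare the IPVS virtual servers programmed on a node against the cluster's node IPs:

```shell
# List IPVS virtual servers kube-proxy has programmed on this node;
# look for VIP entries that match remote node IPs.
sudo ipvsadm -Ln

# The node IPs to compare against:
kubectl get nodes -o wide

# Flannel/kube-proxy iptables rules that may conflict with IPVS:
sudo iptables-save | grep -i -e flannel -e KUBE
```

If remote node IPs show up as IPVS VIPs, traffic destined for those nodes is intercepted locally instead of being routed, which would match the symptom described.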
Environmental Info:
K3s Version: v1.29.6+k3s2 (b4b156d)
Node(s) CPU architecture, OS, and Version:
Linux us-central-43abs 6.1.0-22-cloud-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.94-1 (2024-06-21) x86_64 GNU/Linux
Cluster Configuration:
3 servers in HA mode
Describe the bug:
I want to set up an HA cluster over tailscale using IPVS instead of iptables for kube-proxy. When I set up the initial cluster node, startup works fine and the cluster starts. After setting up the second node, it connects to the cluster, but once flannel sets its iptables rules (even though kube-proxy is using IPVS) the connection stops, and the k3s output shows that it can no longer reach the master node. Even a simple ping to the master node fails, probably due to conflicting iptables rules.
Steps To Reproduce:
Run the following on all nodes to enable ipvs and ipvs least conn load balancing:
Setup the master node
Setup the second node
See in the logs that the node connects and that, directly after flannel sets its iptables rules, all further communication blocks. Run a simple ping test and see that it no longer works either.
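The reproduction steps above elide the actual commands; a hypothetical sketch of enabling IPVS with the least-connection (`lc`) scheduler on each node might look like the following (module names and the k3s config path are assumptions, not taken from the issue):

```shell
# Load the IPVS kernel modules, including the lc scheduler.
sudo modprobe ip_vs
sudo modprobe ip_vs_lc

# Configure kube-proxy via the k3s config file to use IPVS with lc.
sudo tee /etc/rancher/k3s/config.yaml >/dev/null <<'EOF'
kube-proxy-arg:
  - proxy-mode=ipvs
  - ipvs-scheduler=lc
EOF
```

After this, the k3s server/agent would be started as usual; kube-proxy should then report `proxy-mode=ipvs` in its startup logs.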
Expected behavior:
I expect the nodes to connect and stay reachable, with correct iptables rules.
Actual behavior:
Some iptables rules added by flannel conflict with the IPVS rules and block inter-node traffic.
Additional context / logs: