Replies: 1 comment
-
I do wonder if MetalLB could warn about this, or even hard-error if it detects this scenario. This cost me 2 hours of time.
-
This isn't a question, but I searched for so long that I'll share the solution!
After a fully compliant and apparently functional installation of MetalLB in my Talos cluster, my LoadBalancer-type services (as opposed to ClusterIP) were receiving an IP from the defined pool (controller pod and pool OK), and the allocated ports were reachable on the nodes' IPs but not on the assigned external IP. I also didn't see any errors in the speaker pods' logs.
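For anyone hitting the same symptoms, the checks looked roughly like this. The service name, namespace, and speaker label selector are placeholders and depend on how MetalLB was installed, so adjust them to your setup:

```sh
# Hypothetical service name; EXTERNAL-IP shows an address from the MetalLB pool
kubectl get svc my-app

# Speaker logs show no errors (label selector varies by install method)
kubectl -n metallb-system logs -l app.kubernetes.io/component=speaker --tail=100

# The allocated node port answers on a node's own IP ...
curl http://<node-ip>:<node-port>
# ... but the external IP assigned by MetalLB does not
curl http://<external-ip>:<service-port>
```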
The MetalLB documentation puts a lot of emphasis, before installation, on enabling strict ARP mode if kube-proxy runs in IPVS mode (which is not the case on Talos by default, as it uses nftables). I spent a huge amount of time investigating that direction, assuming it was related given the prominence of that warning.
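For reference, that strict ARP setting only matters when kube-proxy runs in IPVS mode. The MetalLB installation docs suggest flipping it in the kube-proxy ConfigMap along these lines; it wasn't my problem, and on Talos kube-proxy is configured through the machine config, so treat this purely as a sketch of what the warning refers to:

```sh
# Preview the change to the kube-proxy ConfigMap (exits non-zero if it differs)
kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl diff -f - -n kube-system

# Apply the change
kubectl get configmap kube-proxy -n kube-system -o yaml | \
  sed -e "s/strictARP: false/strictARP: true/" | \
  kubectl apply -f - -n kube-system
```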
In the end, it turned out to be because my control plane nodes also serve as workers in my homelab cluster. By default, control plane nodes carry the label:
node.kubernetes.io/exclude-from-external-load-balancers
and MetalLB, respecting that label, does not announce external IPs from these nodes. You just need to remove the label or configure the MetalLB speakers to ignore it.
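Concretely, that means one of the following. The node name is a placeholder, and the flag and Helm value names are the ones documented by MetalLB, so verify them against the version you run:

```sh
# Option 1: remove the exclusion label from the node
# (the trailing "-" tells kubectl to delete the label)
kubectl label node <control-plane-node> node.kubernetes.io/exclude-from-external-load-balancers-

# Option 2: keep the label but tell the speakers to ignore it, e.g. if MetalLB
# was installed with its Helm chart, via the value that sets --ignore-exclude-lb
helm upgrade metallb metallb/metallb -n metallb-system --set speaker.ignoreExcludeLB=true
```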
I misread the troubleshooting documentation and overlooked this:
https://metallb.universe.tf/troubleshooting/#metallb-is-not-advertising-my-service-from-my-control-plane-nodes-or-from-my-single-node-cluster
and no AI assistant ever suggested to me that the issue could be due to this label.