Here's what ChatGPT came up with:

Here's a clean way to run Talos + Cilium (cluster-pool IPAM) in IPv4/IPv6 dual-stack for nodes, Pods, and Services, with public exposure.

1) Talos: dual-stack cluster CIDRs & kubelet node IPs

Patch the Talos machine configs to set dual-stack Pod/Service CIDRs and disable any default CNI; also guide the kubelet to pick your public v4/v6 node IPs:

```yaml
# cluster network (applies to both controlplane & workers)
cluster:
  network:
    podSubnets:
      - 10.244.0.0/16
      - fd00:10:244::/56
    serviceSubnets:
      - 10.96.0.0/12
      - fd00:10:96::/112
    cni:
      name: none
# kubelet node IP selection (per-machine, adjust to your node's real subnets)
machine:
  kubelet:
    nodeIP:
      validSubnets:
        - 5.45.96.0/22          # example v4 for c1
        - 2a03:4000:5:430::/64
```

(Relevant Talos keys: `cluster.network.podSubnets`, `cluster.network.serviceSubnets`, `machine.kubelet.nodeIP.validSubnets`.)
2) Install Cilium with dual-stack & cluster-pool IPAM

Use Helm and pick cluster-pool (CRD-backed) IPAM; set your Pod CIDR pools and enable IPv6 support:

```bash
helm upgrade --install cilium cilium/cilium -n kube-system \
  --set kubeProxyReplacement=true \
  --set ipv6.enabled=true \
  --set ipam.mode=cluster-pool \
  --set ipam.operator.clusterPoolIPv4PodCIDRList='{10.244.0.0/16}' \
  --set ipam.operator.clusterPoolIPv6PodCIDRList='{fd00:10:244::/56}'
```

(You can also tune per-node mask sizes via `ipam.operator.clusterPoolIPv4MaskSize` and `ipam.operator.clusterPoolIPv6MaskSize`.)
3) Pick routing mode for your VPS topology

Your nodes are on different public subnets, so use encapsulation (tunnel routing) unless you control upstream routing; a minimal sketch follows below.
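One way to set this, as a sketch: `routingMode` and `tunnelProtocol` are standard Cilium Helm values, and VXLAN is Cilium's default tunnel protocol; `--reuse-values` keeps the settings from step 2.

```bash
# Tunnel (VXLAN) routing so Pod traffic can cross the different public subnets.
# Assumes the Cilium release from step 2 is already installed.
helm upgrade cilium cilium/cilium -n kube-system \
  --reuse-values \
  --set routingMode=tunnel \
  --set tunnelProtocol=vxlan
```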
4) Expose Services on public IPs

Cilium's LB IPAM allocates v4/v6 LoadBalancer IPs from pools you define; then announce them via BGP (cross-subnet) or L2 Announcements (same L2).

```yaml
apiVersion: cilium.io/v2
kind: CiliumLoadBalancerIPPool
metadata:
  name: public-pool
spec:
  blocks:
    - cidr: "203.0.113.64/27"         # example public IPv4 pool routed to your nodes
    - cidr: "2001:db8:abcd:100::/120" # example public IPv6 pool
```

LB IPAM is always on; it activates when a pool exists. (Cilium Documentation) Advertise those IPs as sketched below.
5) Create dual-stack Services

Use the standard dual-stack fields:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  annotations:
    io.cilium/lb-ipam-ips: ""   # optional: let LB IPAM auto-assign
spec:
  type: LoadBalancer
  ipFamilyPolicy: PreferDualStack
  ipFamilies: [IPv4, IPv6]
  selector: { app: myapp }
  ports:
    - port: 80
      targetPort: 8080
```

(Kubernetes dual-stack Service fields: `ipFamilyPolicy`, `ipFamilies`.)

6) Alternatives for cross-network Pod reachability

If you prefer to keep `routingMode=native` without tunnels, you can let Talos KubeSpan route Pod CIDRs across networks (a Talos feature) and keep Cilium in native mode. (Talos Linux docs) See the sketch after the checklist below.

Quick checklist:
- dual-stack `podSubnets`/`serviceSubnets` and `cni: none` in the Talos config
- Cilium with `ipv6.enabled=true` and cluster-pool IPAM
- a routing mode that fits your topology (tunnel across subnets)
- an LB IPAM pool plus BGP/L2 announcements
- `ipFamilyPolicy`/`ipFamilies` on your Services
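A minimal sketch of enabling KubeSpan in the Talos machine config. `machine.network.kubespan` is the real Talos key; the `advertiseKubernetesNetworks` flag for carrying Pod CIDRs over the mesh is hedged here as an assumption to verify against your Talos version:

```yaml
# KubeSpan builds a WireGuard mesh between nodes, so Pod CIDRs stay
# reachable across networks without Cilium tunneling.
machine:
  network:
    kubespan:
      enabled: true
      advertiseKubernetesNetworks: true  # assumption: advertise Pod CIDRs over the mesh
cluster:
  discovery:
    enabled: true   # KubeSpan relies on the Talos discovery service
```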
Confidence: 7/10
---
At least for me they were, or still are, big.
I wasted half a day yesterday and again today trying to get a Talos cluster up and running.
At first it went well, then I wanted to add dual-stack v4/v6.
And something got stuck, so I thought, let's set the whole thing up again.
But this time I picked the v4 and v6 IPs from the beginning.
Oh boy, I was in for a world of hurt.
I'm running this at netcup and they don't have "cloud load balancers".
Also, I went with Cilium.
I wouldn't wish that on my worst enemy.
Nothing you do works; you have to manually type in the IPs over VNC and can't use CTRL+Q or CTRL+W.
You have to format the disk every time, and that requires a password from the provider's side every time. Hurdles upon hurdles.
And in the end it was the dual-stack configuration.
After I removed every trace of IPv6, the installation, bootstrap, and Cilium install went through like butter.
However, I don't have a cloud load balancer, and a failover IP costs additional money.
I'm on a very tight budget.
Each node, however, has a /64 from the provider.
I don't know how to make it work.
I'm afraid that if I add an IPv6 address again, plus routing and resolvers, the whole thing might crash and burn again.
Mostly I had issues with BackOff..something; one worker would work flawlessly, the other two would simply not become ready.
Any guidance on how to make it work? Or is dual-stack a lost cause on Talos?
P.S. I haven't done anything with k8s in the last 6 months and have forgotten a lot.