How Can We Integrate Ironic Network into Kubernetes Ecosystem #1739
Replies: 3 comments 2 replies
-
Host networking is also a matter of Ironic security. It prevents using Pod Security Standards (PSS), since it violates every profile except privileged, and it also exposes the Ironic API on the host's network, where it might be reachable by actors who are not supposed to have access (local unprivileged users, nodes on the same network, ...)
-
I have created a small PoC as part of the Short-term focus section to demonstrate running Ironic without hostNetwork, limited to the VirtualMedia use case, using a NodePort service. You can find the PR here: PoC Hostnetworkless Ironic with VirtualMedia Using NodePorts #1433. I replaced keepalived with manual commands that add the Ironic external address to the ironicendpoint bridge and remove it during pivoting. We need a better solution to ensure Ironic remains reachable even if the node where the bridge is configured goes down. Your feedback and ideas would be greatly appreciated as we continue to refine and improve this approach.
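As a rough illustration of the NodePort approach, a Service for the Ironic API might look like the sketch below. The names, namespace, labels, and port numbers are assumptions for illustration, not taken from the PR:

```yaml
# Hypothetical NodePort Service exposing the Ironic API without hostNetwork.
# Name, namespace, selector, and nodePort are illustrative assumptions.
apiVersion: v1
kind: Service
metadata:
  name: ironic-api
  namespace: baremetal-operator-system
spec:
  type: NodePort
  selector:
    app: ironic
  ports:
    - name: api
      port: 6385        # Ironic API default port
      targetPort: 6385
      nodePort: 30685   # reachable on every node's IP
```

With this shape, BMCs fetch virtual media images via any node IP at the nodePort, which is why the external address still has to be routed to a live node.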
-
I wanted to update you on our progress. I've created a new PoC to demonstrate running Ironic without hostNetwork, limited to the VirtualMedia use case, using a LoadBalancer service with MetalLB. You can find the PR here: PoC Hostnetworkless Ironic with VirtualMedia Using MetalLB LoadBalancer
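For reference, a minimal MetalLB setup for such a PoC could look like the following sketch. The address pool, namespace, and Service names are assumptions, not taken from the PR:

```yaml
# Hypothetical MetalLB address pool plus a LoadBalancer Service for Ironic.
# Addresses, names, and selector are illustrative assumptions.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ironic-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.111.100-192.168.111.110   # example provisioning-network range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: ironic-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - ironic-pool
---
apiVersion: v1
kind: Service
metadata:
  name: ironic
  namespace: baremetal-operator-system
spec:
  type: LoadBalancer
  loadBalancerIP: 192.168.111.100   # stable address BMCs can always reach
  selector:
    app: ironic
  ports:
    - name: api
      port: 6385
      targetPort: 6385
```

In L2 mode, MetalLB takes over the failover role keepalived currently plays: if the node answering ARP for the address goes down, another node announces it.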
-
Manage Ironic Natively in Metal3 Kubernetes Management Cluster
Currently, the Ironic network is not managed in a Kubernetes-native way because of the requirements associated with each Ironic service. Additionally, the network is not a first-class concept in Kubernetes! Bridging the gap between Ironic's incompatible needs and the Kubernetes network model is where Metal3 can potentially shine and demonstrate its strengths.
This discussion sheds light on the challenging areas and highlights the corner cases that may arise with any approach towards improving integration. It also provides an open space for brainstorming, discussing all proposed ideas, and demonstrating small PoCs.
Goals
Overview of the Problem
Kubernetes uses a single network to connect all pods, meaning each pod is managed through one interface. However, the Metal3 network model requires two separate networks: the provisioning network (Ironic network) and the external network (Kubernetes network), also known as the baremetal network.
Due to this, Metal3 manages the provisioning network in a custom way that Kubernetes is not aware of. Currently, Metal3 needs an additional NIC on every node, connected to a bridge (named ironicendpoint). Ironic pods require host networking to manage this endpoint. Metal3 includes Keepalived in the Ironic pods to attach a VIP to this bridge on the node hosting Ironic, ensuring Ironic is always reachable at a static IP, even when moved to another node or cluster. Additionally, for PXE provisioning, Metal3 runs dnsmasq within the Ironic stack to provide DHCP services to baremetal nodes; this service must sit on the same LAN as the baremetal nodes, which is another reason the Ironic pods need host networking.
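To make the current requirement concrete, the Ironic pod today needs roughly the settings sketched below. This is a simplified illustration of the constraints described above, not the actual ironic-image manifest; the image references and capability list are assumptions:

```yaml
# Simplified sketch of why the Ironic pod currently needs host networking.
# Images, capabilities, and container layout are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: ironic
spec:
  hostNetwork: true       # dnsmasq must sit on the baremetal L2 segment
  containers:
    - name: dnsmasq       # DHCP/PXE for baremetal nodes
      image: quay.io/metal3-io/ironic   # actual image/tag may differ
      securityContext:
        capabilities:
          add: ["NET_ADMIN", "NET_RAW"]  # raw sockets for DHCP
    - name: keepalived    # attaches the VIP to the ironicendpoint bridge
      image: quay.io/metal3-io/keepalived
```

It is this combination (hostNetwork plus added capabilities) that conflicts with every PSS profile except privileged.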
Main Problematic Areas
Managing Two Separate Networks:
Discoverability of Ironic:
Providing DHCP Services:
Long-term Vision
Our long-term vision is to provide a Kubernetes native solution for managing baremetal hosts via a provisioning stack that is also running on Kubernetes. By addressing our issues with a focus on this mission, we aim to create a comprehensive roadmap that integrates Metal3 components seamlessly into the Kubernetes ecosystem.
List of Proposed Ideas
Meta CNI Solution:
Custom Load Balancer: build a custom load balancer based on httpd, inspired by the OpenShift proxy, and run Ironic as pods managed by the native Kubernetes network. This is being discussed in Issue #21.
Short-term Focus
PoC: reach Ironic from a Kubernetes LoadBalancer service:
Experiment with Meta CNI (e.g., Multus):
For investigation
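For the Multus experiment, the idea would be to give Ironic pods a second interface on the provisioning network via a NetworkAttachmentDefinition, along these lines. The attachment name, bridge name, and addressing are assumptions for illustration:

```yaml
# Hypothetical Multus NetworkAttachmentDefinition attaching Ironic pods
# to the provisioning bridge as a second interface.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: provisioning-net
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "ironicendpoint",
      "ipam": {
        "type": "static",
        "addresses": [{ "address": "172.22.0.2/24" }]
      }
    }
```

A pod would then request the extra interface with the annotation `k8s.v1.cni.cncf.io/networks: provisioning-net`, letting the Ironic API stay on the cluster network while dnsmasq reaches the baremetal LAN.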