
cluster-api-provider-agent

Kubernetes-native declarative infrastructure for agent-based installation.

cluster-api-provider-agent serves as an infrastructure provider for the Kubernetes Cluster API.

How to install cluster-api-provider-agent

cluster-api-provider-agent is deployed into an existing OpenShift / Kubernetes cluster.

Prerequisites:

  • Admin access to the OpenShift / Kubernetes cluster specified by the KUBECONFIG environment variable

Installing with clusterctl

Add the agent provider to the clusterctl configuration file (located at $HOME/.cluster-api/clusterctl.yaml):

providers:
  - name: "agent"
    url: "https://github.com/openshift/cluster-api-provider-agent/releases/latest/infrastructure-components.yaml"
    type: "InfrastructureProvider"

Set up the provider:

clusterctl init --infrastructure agent

Installing from source

Build the cluster-api-provider-agent image and push it to a container image repository:

make docker-build docker-push IMG=<your docker repository>:`git rev-parse --short HEAD`

Deploy cluster-api-provider-agent to your cluster:

make deploy IMG=<your docker repository>:`git rev-parse --short HEAD`

Installing a namespace-scoped cluster-api-provider-agent

You can configure the provider to watch and manage AgentClusters and AgentMachines in a single namespace, and to watch and use Agents from a different namespace.

Build the cluster-api-provider-agent image and push it to a container image repository:

make docker-build docker-push IMG=<your docker repository>:`git rev-parse --short HEAD`

Deploy cluster-api-provider-agent to your cluster, specifying the namespaces to watch:

make deploy IMG=<your docker repository>:`git rev-parse --short HEAD` WATCH_NAMESPACE=foo AGENTS_NAMESPACE=bar

If you are using the namespace-scoped provider, you will need to create Role-Based Access Control (RBAC) rules that grant the provider access to the resources it manages; see the example here.
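As a rough sketch, such RBAC rules could take the following shape. All names, namespaces, and the service-account identity below are illustrative assumptions, not taken from the repository; consult the linked example for the authoritative rules.

```yaml
# Illustrative Role/RoleBinding granting the provider access to Agents
# in AGENTS_NAMESPACE (here "bar"). Adjust names, namespaces, and verbs
# to match your actual deployment.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: capi-provider-agent-access   # illustrative name
  namespace: bar                     # AGENTS_NAMESPACE
rules:
  - apiGroups: ["agent-install.openshift.io"]
    resources: ["agents"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: capi-provider-agent-access
  namespace: bar
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: capi-provider-agent-access
subjects:
  - kind: ServiceAccount
    name: cluster-api-provider-agent   # assumed service-account name
    namespace: foo                     # WATCH_NAMESPACE
```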

Design

cluster-api-provider-agent uses the Infrastructure Operator to adjust the number of workers in an OpenShift cluster. The CRDs that it manages are:

  • AgentCluster
  • AgentMachine
  • AgentMachineTemplate
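For illustration, an AgentMachineTemplate might look roughly as follows. The API group/version and the agentLabelSelector field are assumptions about the provider's CRD schema; verify against the CRDs installed in your cluster.

```yaml
# Illustrative AgentMachineTemplate; field names are assumptions --
# consult the installed CRDs for the authoritative schema.
apiVersion: capi-provider.agent-install.openshift.io/v1alpha1
kind: AgentMachineTemplate
metadata:
  name: worker-template   # illustrative name
  namespace: foo
spec:
  template:
    spec:
      # Only Agents matching this selector would be considered
      # when a new Machine is created from this template.
      agentLabelSelector:
        matchLabels:
          pool: workers
```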

High-level flow

  1. Using the Infrastructure Operator, create an InfraEnv suitable for your environment.
  2. Download the Discovery ISO from the InfraEnv's download URL and use it to boot one or more hosts. Each booted host will automatically run an agent process which will create an Agent CR in the InfraEnv's namespace.
  3. Approve Agents that you recognize and set any necessary properties (e.g., hostname, installation disk).
  4. When a new Machine is created, the CAPI provider will find an available Agent to associate with the Machine, and trigger its installation via the Infrastructure Operator.
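Step 1 above can be sketched with an InfraEnv resource from the Infrastructure Operator. The names, namespace, and pull-secret reference below are placeholders; adjust them for your environment.

```yaml
# Illustrative InfraEnv for the Infrastructure Operator (assisted-service).
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: my-infraenv        # illustrative name
  namespace: bar           # the namespace Agents will be created in
spec:
  pullSecretRef:
    name: pull-secret      # Secret containing your image pull secret
  sshAuthorizedKey: "ssh-rsa AAAA..."   # optional; enables SSH access to booted hosts
```

Once a host boots from the Discovery ISO and its Agent CR appears, approval (step 3) amounts to setting spec.approved to true on that Agent, for example with kubectl patch.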

Drawbacks

ProviderID controller

To get the cluster machine approver to approve the kubelet-serving CSRs, cluster-api-provider-agent implements a provider-id controller that takes the ProviderID from the appropriate Machine and adds it to the Node's spec. This controller is a workaround to allow hands-free e2e addition of hosts; it should be replaced by similar logic in the Infrastructure Operator.
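The effect of the provider-id controller can be pictured as the following Node fragment. The exact providerID format is provider-specific and shown here only as a placeholder; the value is copied from the corresponding Machine rather than set by hand.

```yaml
# Illustrative Node fragment after the provider-id controller has run.
apiVersion: v1
kind: Node
metadata:
  name: worker-0           # illustrative node name
spec:
  providerID: <provider-id-from-machine>   # mirrors the Machine's spec.providerID
```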

Status

So far, this CAPI provider has been tested only for resizing HyperShift Hosted Clusters. It also has the following limitations:

  • The re-provisioning flow currently requires manually rebooting the host with the Discovery ISO.
  • The CAPI provider does not yet have cluster lifecycle features; it only adds and removes nodes from an existing cluster.
  • The CAPI provider currently selects the first free Agent that is approved and whose validations are passing. It will be smarter in the future.
