ServeMyKind - Batteries-included KinD cluster 🔋🐋

Serve My Kind logo

This Ansible playbook deploys a local KinD cluster with an NGINX ingress controller and a DNS configuration that lets the host machine resolve the cluster's services by their FQDNs.

The resulting cluster is intended for development and testing: you can deploy applications and services and reach them from the host machine's browser by domain name, much like in a production environment.

Everything happens automagically 🧙‍♂️ - a cluster-specific root CA is generated and attached to the KinD nodes running on a dedicated subnet, the ingress configuration is applied, and DNS is configured on the host machine.

You can literally name your cluster my-k00bernetes and access its services via https://<service>.my.k00bernetes! 🚀

Requirements

Tasks defined in this playbook require sudo privileges within the shell environment in which they are executed.

For Ansible:

  • kubernetes.core collection

Usage

To install the KinD cluster (if not present) and apply the NGINX ingress manifests and DNS configuration, run the following command:

ansible-playbook -i ./environments/<ENVIRONMENT> install.yml

While the playbook runs, any active Internet connections will drop for a short period, because applying the DNS configuration restarts the NetworkManager service.

To change the group of target hosts, edit the install.yml file accordingly; the default is hosts: "all".
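
For reference, the relevant part of install.yml looks roughly like the sketch below; the play name and exact layout are illustrative, not the literal contents of the file:

# Sketch of the play header in install.yml - illustrative only
- name: Serve My Kind
  hosts: "all"   # change "all" to a narrower inventory group if needed
  become: true   # the tasks require sudo privileges (see Requirements)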

To remove the cluster and the other assets created on the host machine (such as the generated certificates, which are automatically untrusted again), use the uninstall.yml playbook.

Assumptions

READ CAREFULLY BEFORE RUNNING THE PLAYBOOK!

  • The playbook is executed on a Linux machine with at least Python 3 installed:

    The playbook is designed to work on Linux machines, as it uses NetworkManager and dnsmasq to configure DNS. Also, all of the paths follow the Linux filesystem structure - I can't be bothered to make this work on Windows/Mac, as those are not my OSes of choice. 😅

  • The machine has Docker, Ansible and Helm installed:

    I mean, just look at the tasks in the playbook: like 80% of them are about botching together a KinD cluster with somewhat sketchy manifests.

  • Networking is managed by a combo of NetworkManager and dnsmasq:

    The playbook will install dnsmasq and NetworkManager if they are not present on the host machine! This is required for the DNS configuration to work properly. (In the future, other DNS configuration methods may be supported.)

  • kube-proxy is DISABLED and CoreDNS is used as the DNS provider:

    The playbook is designed to work with the Cilium CNI, which replaces kube-proxy, while CoreDNS remains the in-cluster DNS provider. This is required for the DNS configuration to work properly.

  • If you set the cluster name to my-cool-cluster, the domain will be generated as my.cool.cluster:

    The domain is generated by replacing the hyphens in the cluster name with dots (see the sketch after this list). This makes the domain more readable and avoids issues with domain name resolution.
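
For reference, that transformation can be expressed with a single Jinja2 filter; the variable name below is hypothetical, since the real internal variable is prefixed with _ and is not part of the public interface:

# Hypothetical variable showing how the cluster domain is derived
example_cluster_domain: "{{ serve_my_kind_cluster_name | replace('-', '.') }}"
# "my-cool-cluster" -> "my.cool.cluster"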

Roles

  • cluster - installs the KinD cluster and attaches a freshly generated CA certificate
  • cni - installs the Cilium CNI, enables load balancing via MetalLB, and installs the NGINX ingress controller
  • dns - configures DNS for the cluster on the host machine by modifying its NetworkManager and dnsmasq configuration files
  • ca - enables certificate generation for the cluster via cert-manager and installs the CA certificate on the host machine (trusting it system-wide)
  • post - applies additional manifests to the cluster (e.g. Helm charts, custom resources, etc.)

Variables

Below is a list of variables used in the Ansible playbook. These variables can be customized to suit your setup:

| Variable Name | Description | Default Value |
| --- | --- | --- |
| serve_my_kind_cluster_name | Name of the Kubernetes cluster | "cluster-local" |
| serve_my_kind_kubeconfig_path | Path to the kubeconfig file | "{{ lookup('ansible.builtin.env', 'HOME') }}/.kube/config" |
| serve_my_kind_manifests_and_configs_path | Directory where manifests and configurations are stored | "/tmp" |
| serve_my_kind_networking_namespace | Namespace for networking components | "kube-networking" |
| serve_my_kind_network | Name for the network associated with the cluster | "{{ serve_my_kind_cluster_name }}-net" |
| serve_my_kind_network_cidr | CIDR block for the cluster network | "172.30.0.0/16" |
| serve_my_kind_default_ingress_class | Default ingress class to use for the cluster | "nginx" |
| serve_my_kind_configure_dns | Boolean to enable DNS configuration for the cluster | true |
| serve_my_kind_certification_namespace | Namespace for certification management | "kube-certs" |
| serve_my_kind_host_certificates_dir | Directory for storing host certificates | "/etc/ssl/certs" |
| serve_my_kind_ca_issuer_name | Name of the Certificate Authority (CA) issuer | "{{ serve_my_kind_cluster_name }}-ca" |
| serve_my_kind_certificate_days_validity | Number of days the certificates are valid | 365 |
| serve_my_kind_hubble_enabled | Boolean to enable Hubble observability | true |
| serve_my_kind_hubble_relay_enabled | Boolean to enable Hubble relay | true |
| serve_my_kind_hubble_ui_enabled | Boolean to enable Hubble UI | true |
| serve_my_kind_hubble_metrics_enabled | List of Hubble metrics to enable | "dns", "drop", "tcp", "flow", "icmp", "http" |
| serve_my_kind_cluster_normal_workers | Number of normal worker nodes in the cluster | 2 |
| serve_my_kind_cluster_special_nodes | List of special nodes configurations | [] |
| serve_my_kind_cluster_enable_strict_arp | Boolean to enable strict ARP mode in the cluster | true |
| serve_my_kind_dns_network_manager_install | Boolean to install NetworkManager if not present | true |
| serve_my_kind_dns_dnsmasq_install | Boolean to install dnsmasq if not present | true |
| serve_my_kind_dns_enable_coredns_logging | Boolean to enable CoreDNS logging | true |
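
For example, a group_vars file inside your chosen environments/<ENVIRONMENT> directory could override a few of these defaults; the values below are purely illustrative:

# Illustrative overrides - adjust to your setup
serve_my_kind_cluster_name: "my-cool-cluster"   # services resolve under my.cool.cluster
serve_my_kind_cluster_normal_workers: 3         # three worker nodes instead of two
serve_my_kind_network_cidr: "172.31.0.0/16"     # avoid clashing with an existing Docker network
serve_my_kind_hubble_ui_enabled: false          # skip the Hubble UI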

Internal variables

There is also a plethora of internal variables (prefixed with _) that are omitted from this list; they generally should not be modified.

Each role has its own set of variables, which are defined in the vars/main.yml file of the role.

Extra info

By default, any Ingress resource that enables TLS but does not specify a secret containing a certificate will fall back to the generated root certificate.
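
For example, an Ingress along the lines of the sketch below (the hostname and service name are illustrative) enables TLS without naming a secret and therefore serves the generated root certificate:

# Sketch: TLS is enabled but no secretName is given,
# so the generated default certificate is served for this host.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fallback-tls-example
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - fallback.cluster.local
  rules:
  - host: fallback.cluster.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: fallback-service
            port:
              number: 80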

Information about the CA secrets and cert-manager Issuers created via this playbook is stored in a ConfigMap located in the certification namespace under the name "{{ serve_my_kind_cluster_name }}-cert-config".

If Hubble UI is enabled, it will be accessible via the host machine's browser at http://hubble.<CLUSTER DOMAIN>.

Example of Ingress resource

Assuming the cluster name has been set to cluster-local, so that the resulting service domain is cluster.local, the following manifests expose a service under the FQDN example.service.cluster.local, which becomes accessible from the host machine's browser:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment
  labels:
    app: example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example
        image: hashicorp/http-echo
        args:
        - "-text=hello world"
        ports:
        - containerPort: 5678
        resources:
          limits:
            cpu: "0.5"
            memory: "256Mi"
          requests:
            cpu: "0.1"
            memory: "128Mi"
---
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5678
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    cert-manager.io/cluster-issuer: cluster-local-ca
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - example.service.cluster.local
    secretName: example-service-tls
  rules:
  - host: example.service.cluster.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80

After applying the above manifests, the service will be accessible from the host machine's browser at https://example.service.cluster.local, as seen in the picture below:

Example of Ingress resource
