ARMadillo is a "pet project", designed to provide a way to deploy a Kubernetes cluster on Raspberry Pis in either a single-master or a multi-master topology, all using simple bash scripts and leveraging the native capabilities of kubeadm.
This repo provides the software stack of ARMadillo. For an overview of the hardware stack and the build-out process, please visit the ARMadillo page on my personal blog.
- Download the Raspbian OS zip image. ARMadillo has been tested on both Raspbian Stretch and Raspbian Buster Lite.
- The LAN the Pis are connected to needs to be DHCP-enabled.
- Flashing the SD card and deploying Raspbian is easy. First, download and install balenaEtcher.
- Insert the SD card into your SD card reader.
- Select the Raspbian zip file you've just downloaded.
- Select the SD card and hit "Flash!".
- Once flashing is done, re-insert the SD card into your SD card reader (balenaEtcher will have unmounted it).
- Create an empty file named ssh and copy it to the /boot partition. This is required to be able to SSH into the Pi (see the snippet after this list).
- Insert the card back into the Pi and power it on.
- Repeat these steps for each Pi in your cluster.
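Creating the ssh file can be done from your workstation while the flashed card is still mounted. A minimal sketch; the /boot mount point varies by OS, so the path below is an assumption:

# an empty file named "ssh" on the boot partition enables the SSH server on first boot
# mount point is OS-dependent: e.g. /Volumes/boot on macOS, /media/$USER/boot on many Linux desktops
touch /Volumes/boot/ssh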
- Fork this repo :-)
- The env_vars.sh file is the most important file, as it determines the environment variables for either the single- or multi-master deployment. Based on your deployment, edit the env_vars.sh file, then commit & push the changes to your forked ARMadillo repo.
- For a multi-master deployment, edit the ARMadillo/deploy/multi_master/env_vars.sh file.
- For a single-master deployment, edit the ARMadillo/deploy/single_master/env_vars.sh file.
Note: ARMadillo deployment scripts source the env_vars.sh variables upon execution, so editing env_vars.sh is a one-time task.
- To make things a lot easier, edit the hosts file on the local machine you will connect to the Pis from, adding the HAProxy, master, and worker nodes' hostnames/IPs based on the changes you just made to env_vars.sh (an illustrative example follows the note below).
Note: For multi-master deployments, ARMadillo supports more than 3 master nodes and more or fewer than 2 worker nodes. To change the number of master or worker nodes, simply edit them in the ARMadillo/deploy/multi_master/env_vars.sh file.
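A hedged example of what the local hosts file entries might look like; the hostnames and addresses below are illustrative assumptions, so use the values you actually set in env_vars.sh:

# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts) — example entries only
192.168.1.10   haproxy
192.168.1.11   master01
192.168.1.12   master02
192.168.1.13   master03
192.168.1.21   worker01
192.168.1.22   worker02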
- SSH into the master and worker nodes using the default raspberry password and clone the ARMadillo GitHub repository you've just forked.
sudo apt-get install git -qy && git clone https://github.com/<your github username>/ARMadillo.git
If you are using a private GitHub repository, a personal access token is required. After generating the token, edit ARMadillo/scripts/git_clone/git_clone_private.sh and use it to clone the repository.
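One common pattern for cloning a private repository over HTTPS with a personal access token is shown below; this is a generic git sketch, and git_clone_private.sh may take a different approach:

# substitute your own username and token — never commit the token anywhere
git clone https://<your github username>:<your personal access token>@github.com/<your github username>/ARMadillo.git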
- Run the "haproxy_config_hosts.sh" script and wait for the host to restart.
./ARMadillo/deploy/multi_master/haproxy_config_hosts.sh
- From your local environment, verify that you can log in to the HAProxy node using the new hostname/IP and the username/password you previously set.
- SSH into the master and worker nodes and run the prerequisites scripts.
Note: This step can take ~15 minutes per node, but it is safe to run the prerequisites scripts in parallel on each master/worker (see the sketch after this list).
- On MASTER01 run:
./ARMadillo/deploy/multi_master/master01_perquisites.sh
- On MASTER02 run:
./ARMadillo/deploy/multi_master/master02_perquisites.sh
- On MASTER03 run:
./ARMadillo/deploy/multi_master/master03_perquisites.sh
- On WORKER01 run:
./ARMadillo/deploy/multi_master/worker01_perquisites.sh
- On WORKER02 run:
./ARMadillo/deploy/multi_master/worker02_perquisites.sh
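To run the prerequisites in parallel, something like the following could be launched from your local environment. A hedged convenience sketch: it assumes the hostnames match your hosts file entries and that SSH key authentication is set up (password prompts do not mix well with backgrounded jobs):

# launch each node's prerequisites script remotely, in parallel
for host in master01 master02 master03 worker01 worker02; do
  ssh pi@"$host" "./ARMadillo/deploy/multi_master/${host}_perquisites.sh" &
done
wait   # each node reboots once its script completes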
Before moving on to the next step, wait for all master and worker nodes to restart.
At this point, the HAProxy node and all cluster nodes should be able to ping one another using their respective hostnames/IPs.
- On the HAProxy Pi, run the deployment and certificate generation script.
./ARMadillo/deploy/multi_master/haproxy_install.sh
- On MASTER01 only, run the kubeadm initialization script. The script will:
  - Create the k8s cluster and join the first node, MASTER01, to it.
  - Remotely do the same for all remaining master and worker nodes.
./ARMadillo/deploy/multi_master/master01_kubeadm_init.sh
Once the script has finished, the k8s cluster will be up and running.
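A quick sanity check can be run from MASTER01 at this point. This assumes kubectl has been configured for your user (kubeadm prints the admin.conf copy instructions after initialization):

kubectl get nodes -o wide   # all master and worker nodes should eventually report Ready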
- SSH into the master and worker nodes using the default raspberry password and clone the ARMadillo GitHub repository you've just forked.
sudo apt-get install git -qy && git clone https://github.com/<your github username>/ARMadillo.git
- Run the prerequisites script on the master node and on each of the worker nodes.
Note: This step can take ~15 minutes per node, but it is safe to run the prerequisites scripts in parallel on each master/worker (the parallel-run sketch from the multi-master section applies here as well).
- On MASTER01 run:
./ARMadillo/deploy/single_master/master01_perquisites.sh
- On WORKER01 run:
./ARMadillo/deploy/single_master/worker01_perquisites.sh
- On WORKER02 run:
./ARMadillo/deploy/single_master/worker02_perquisites.sh
- On WORKER03 run:
./ARMadillo/deploy/single_master/worker03_perquisites.sh
- On WORKER04 run:
./ARMadillo/deploy/single_master/worker04_perquisites.sh
- On WORKER05 run:
./ARMadillo/deploy/single_master/worker05_perquisites.sh
Before moving on to the next step, wait for all master and worker nodes to restart.
At this point, all nodes should be able to ping one another using their hostnames/IPs (a convenience check follows below).
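Waiting for the nodes to come back can be automated from your local environment. A hedged helper, assuming Linux-style ping flags and the illustrative hostnames from your env_vars.sh edits:

# block until every node answers a ping after the prerequisites reboot
for host in master01 worker01 worker02 worker03 worker04 worker05; do
  until ping -c 1 -W 2 "$host" > /dev/null 2>&1; do sleep 5; done
  echo "$host is reachable"
done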
- On the only master node, run the kubeadm initialization script. The script will:
  - Create the k8s cluster and join the first node, MASTER01, to it.
  - Remotely do the same for all the worker nodes.
./ARMadillo/deploy/single_master/master01_kubeadm_init.sh
Once the script has finished, the k8s cluster will be up and running.
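As an optional smoke test, a throwaway deployment can confirm that workloads schedule onto the workers. This assumes kubectl is configured on the master; the deployment name below is hypothetical, and the official nginx image is multi-arch with ARM builds:

kubectl create deployment nginx-test --image=nginx   # hypothetical test name
kubectl get pods -o wide                             # the pod should land on a worker node
kubectl delete deployment nginx-test                 # clean up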
To gracefully shut down a Raspberry Pi, use the following command:
sudo /sbin/shutdown -hP now
Also included in this repo are remote shutdown scripts which you can use from your local environment for both the multi- and single-master deployments.
./scripts/graceful_shudown/multi_master/graceful_shudown.sh
./scripts/graceful_shudown/single_master/graceful_shudown.sh
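The idea behind a remote shutdown is simple; a minimal sketch is shown below with illustrative hostnames (the repo's scripts may differ):

# gracefully halt and power off each node over SSH
for host in worker01 worker02 master01; do
  ssh pi@"$host" 'sudo /sbin/shutdown -hP now'
done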
To reset kubeadm and the k8s cluster deployment, run the kubeadm cleanup script on each master and worker node. The script will reset kubeadm, delete all artifacts and "trail" files, delete the old ARMadillo repo from the Pi, and finally clone the most up-to-date ARMadillo repo.
./ARMadillo/scripts/kubeadm_cleanup/kubeadm_cleanup.sh
If you are using a private GitHub repository, use the dedicated cleanup script.
./ARMadillo/scripts/kubeadm_cleanup/kubeadm_cleanup_private_repo
For the cleanup scripts to work, you will need to execute them from the Pi's /home/pi directory.
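For example, running the standard cleanup script from the required directory:

cd /home/pi
./ARMadillo/scripts/kubeadm_cleanup/kubeadm_cleanup.sh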