
Commit d79f01d

Merge pull request #1 from jplandolt/multi-worker
Multi workers
2 parents 26f7a2d + 06a541e

2 files changed: 207 additions & 106 deletions

README.md

Lines changed: 40 additions & 20 deletions
````diff
@@ -5,14 +5,14 @@
 [![Kubernetes](https://img.shields.io/badge/kubernetes-%23326ce5.svg?style=for-the-badge&logo=kubernetes&logoColor=white)](https://kubernetes.io/)
 [![Ubuntu](https://img.shields.io/badge/Ubuntu-E95420?style=for-the-badge&logo=ubuntu&logoColor=white)](https://ubuntu.com/)
 
-This project sets up a local Kubernetes cluster using Vagrant and VirtualBox. It creates two Ubuntu 22.04 virtual machines: one master node and one worker node with automatic installation of Docker, Kubernetes components, and necessary configurations.
+This project sets up a local Kubernetes cluster using Vagrant and VirtualBox. It creates two Ubuntu 24.04 virtual machines: one control plane node and one worker node with automatic installation of Docker, Kubernetes components, and necessary configurations.
 
 ## Architecture
 
 ```mermaid
 graph TB
     subgraph Vagrant-Managed Environment
-        subgraph Master Node
+        subgraph Control Plane Node
             A[Control Plane] --> B[API Server]
             B --> C[etcd]
             B --> D[Controller Manager]
````
````diff
@@ -25,7 +25,7 @@ graph TB
         B <-.-> F
         B <-.-> H
     end
-    style Master Node fill:#f9f,stroke:#333,stroke-width:2px
+    style Control Plane Node fill:#f9f,stroke:#333,stroke-width:2px
     style Worker Node fill:#bbf,stroke:#333,stroke-width:2px
 ```
 
````
````diff
@@ -45,7 +45,7 @@ graph TB
 <table>
 <tr>
 <td align="center">🔄</td>
-<td>Automated VM provisioning with Ubuntu 22.04</td>
+<td>Automated VM provisioning with Ubuntu 24.04</td>
 </tr>
 <tr>
 <td align="center">🌐</td>
````
````diff
@@ -71,19 +71,19 @@ graph TB
 
 ## 🖥 Cluster Configuration
 
-> **Note about IP Addressing**: This configuration uses `192.168.63.1` and `192.168.63.2` for the master and worker nodes respectively. You can modify these IPs in the `Vagrantfile` to use any IP addresses from your router's IP range that are outside the DHCP scope. Make sure to choose IPs that won't conflict with other devices on your network.
+> **Note about IP Addressing**: This configuration uses `192.168.63.11` and `192.168.63.12` for the control plane and worker nodes respectively. You can modify these IPs in the `Vagrantfile` to use any IP addresses from your router's IP range that are outside the DHCP scope. Make sure to choose IPs that won't conflict with other devices on your network.
 
 <table>
 <tr>
-<th width="50%">Master Node</th>
+<th width="50%">Control Plane Node</th>
 <th width="50%">Worker Node</th>
 </tr>
 <tr>
 <td>
 
 ```yaml
-IP: 192.168.63.1
-Hostname: master
+IP: 192.168.63.11
+Hostname: cplane
 Memory: 2048MB
 CPUs: 2
 Role: Control Plane
````
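If you do change these defaults, it may be worth confirming the candidate addresses are not already claimed on your network before editing the `Vagrantfile`. A quick, illustrative check in bash (not part of this repository):

```bash
# Ping each candidate address once; no reply suggests the IP is unclaimed
for ip in 192.168.63.11 192.168.63.12; do
  if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
    echo "$ip responds - already in use"
  else
    echo "$ip gave no reply - likely free"
  fi
done
```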
````diff
@@ -93,7 +93,7 @@ Role: Control Plane
 <td>
 
 ```yaml
-IP: 192.168.63.2
+IP: 192.168.63.12
 Hostname: worker
 Memory: 2048MB
 CPUs: 2
````
````diff
@@ -110,7 +110,7 @@ Role: Worker
 
 ## Quick Start
 
-> **💡 Tip**: Before starting, you may want to adjust the IP addresses in the `Vagrantfile` if the default IPs (`192.168.63.1`, `192.168.63.2`) conflict with your network setup. Edit the `private_network` IP settings in the Vagrantfile to match your network requirements.
+> **💡 Tip**: Before starting, you may want to adjust the IP addresses in the `Vagrantfile` if the default IPs (`192.168.63.11`, `192.168.63.12`) conflict with your network setup. Edit the `private_network` IP settings in the Vagrantfile to match your network requirements.
 
 1. Clone this repository:
 ```bash
````
````diff
@@ -123,9 +123,9 @@ cd vagrant
 vagrant up
 ```
 
-3. SSH into the master node:
+3. SSH into the control plane node:
 ```bash
-vagrant ssh master
+vagrant ssh cplane
 ```
 
 4. SSH into the worker node:
````
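Before SSHing into either VM, it can help to confirm that both machines finished provisioning. `vagrant status` is the standard Vagrant command for this (it is not shown in this README; the machine names are inferred from the `vagrant ssh` commands above):

```bash
# Run from the directory containing the Vagrantfile;
# both "cplane" and "worker" should report "running (virtualbox)"
vagrant status
```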
````diff
@@ -163,11 +163,11 @@ vagrant destroy
 
 After the VMs are up and running, follow these steps to initialize your Kubernetes cluster:
 
-### 1. On Master Node
+### 1. On Control Plane Node
 
-First, log into the master node:
+First, log into the control plane node:
 ```bash
-vagrant ssh master
+vagrant ssh cplane
 ```
 
 Pull required Kubernetes images:
````
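The pull command itself, `sudo kubeadm config images pull`, appears as context in the next hunk. To preview which images it would fetch, kubeadm has a companion subcommand (standard kubeadm, not quoted from this README):

```bash
# List the control-plane images kubeadm would pull, without downloading them
sudo kubeadm config images list
```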
````diff
@@ -177,7 +177,7 @@ sudo kubeadm config images pull
 
 Initialize the cluster:
 ```bash
-sudo kubeadm init --pod-network-cidr=10.201.0.0/16 --apiserver-advertise-address=192.168.63.1
+sudo kubeadm init --pod-network-cidr=10.201.0.0/16 --apiserver-advertise-address=192.168.63.11
 ```
 
 ### 2. Install CNI (Container Network Interface)
````
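A later note in this diff lists a local copy of the "kube config" among the scripted steps. After `kubeadm init` succeeds, it prints instructions for exactly that; the standard snippet, reproduced here for convenience rather than quoted from this repository, is:

```bash
# Make kubectl usable for the current non-root user,
# as instructed by kubeadm init's own output
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```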
````diff
@@ -187,13 +187,33 @@ After the cluster initialization, install Weave CNI:
 kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
 ```
 
+### NOTE: The control plane script 'cluster_init.sh' wraps steps 1 and 2
+
+For ease of use, a single script, `cluster_init.sh`, is placed on the control plane(s) as part of the `vagrant up` provisioning. It performs all of the above steps:
+* k8s image pull
+* kubeadm init
+* local copy of "kube config"
+* Weave CNI install
+
+It can be run with this vagrant command:
+```bash
+vagrant ssh cplane -c "./cluster_init.sh"
+```
+
````
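For orientation, here is a minimal sketch of a script covering those four bullets, assuming the same flags used in steps 1 and 2 above. It is a hypothetical reconstruction, not the repository's actual `cluster_init.sh`:

```bash
#!/usr/bin/env bash
# Hypothetical reconstruction of cluster_init.sh from the README's bullet
# list; the real script in this repo may differ.
set -euo pipefail

# k8s image pull
sudo kubeadm config images pull

# kubeadm init (flags taken from step 1 above)
sudo kubeadm init --pod-network-cidr=10.201.0.0/16 \
  --apiserver-advertise-address=192.168.63.11

# local copy of "kube config"
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# Weave CNI install
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
```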
````diff
 ### 3. Join Worker Node
 
-Copy the `kubeadm join` command from the master node's initialization output and run it on the worker node with sudo privileges.
+Copy the `kubeadm join` command from the control plane node's initialization output and run it on the worker node with sudo privileges.
+
````
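For reference, the command printed by `kubeadm init` has this general shape; `<token>` and `<hash>` are placeholders, not values from this project:

```bash
# Run on the worker node with sudo; substitute the token and CA cert hash
# from the control plane's kubeadm init output
sudo kubeadm join 192.168.63.11:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```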
````diff
+### NOTE: The control plane script 'join_cmd.sh' shows the 'join' command
+
+For ease of use, the script `join_cmd.sh` displays the join command for use on worker nodes. Invoke it with this vagrant command:
+```bash
+vagrant ssh cplane -c "./join_cmd.sh"
+```
````
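Bootstrap tokens expire (24 hours by default), so the original join command can go stale. kubeadm can mint a fresh one on demand; it is plausible, though not confirmed by this diff, that `join_cmd.sh` wraps this standard one-liner:

```bash
# Create a fresh bootstrap token and print the complete join command
sudo kubeadm token create --print-join-command
```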
````diff
 
 ### 4. Verify Cluster Status
 
-After joining the worker node, verify the cluster status from the master node:
+After joining the worker node, verify the cluster status from the control plane node:
 
 ```bash
 # Check node status
````
````diff
@@ -203,7 +223,7 @@ kubectl get nodes
 Expected output (it may take a few minutes for the nodes to be ready):
 ```
 NAME     STATUS   ROLES           AGE     VERSION
-master   Ready    control-plane   5m32s   v1.30.x
+cplane   Ready    control-plane   5m32s   v1.30.x
 worker   Ready    <none>          2m14s   v1.30.x
 ```
 
````
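Nodes only report `Ready` once pod networking is up, so if they stay `NotReady`, it is worth checking the Weave pods directly. A standard check (not part of this README):

```bash
# Weave Net runs as a DaemonSet in kube-system; expect one weave-net pod
# per node in Running state
kubectl get pods -n kube-system -l name=weave-net -o wide
```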

````diff
@@ -228,7 +248,7 @@ sudo systemctl restart containerd
 sudo systemctl daemon-reload
 ```
 
-3. After cleanup, retry the cluster initialization on master and join command on worker.
+3. After cleanup, retry the cluster initialization on the control plane node and the join command on the worker.
 
 ## Default Credentials
 
````
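The hunk above shows only the tail of the cleanup steps. When an `init` or `join` attempt fails partway, the usual full reset on the affected node is the generic kubeadm recipe below (not quoted from this README):

```bash
# Generic kubeadm cleanup before retrying init/join
sudo kubeadm reset -f            # undo kubeadm's changes on this node
sudo rm -rf /etc/cni/net.d       # clear leftover CNI configuration
sudo systemctl restart containerd
sudo systemctl daemon-reload
```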
