Commit 194aa8e

section on new helper scripts 'cluster_init.sh' and 'join_cmd.sh', and some text changes (master -> control plane / cplane)
1 parent 7ef262b commit 194aa8e

File tree: 1 file changed (+40, -15 lines)


README.md

Lines changed: 40 additions & 15 deletions
@@ -5,14 +5,14 @@
[![Kubernetes](https://img.shields.io/badge/kubernetes-%23326ce5.svg?style=for-the-badge&logo=kubernetes&logoColor=white)](https://kubernetes.io/)
[![Ubuntu](https://img.shields.io/badge/Ubuntu-E95420?style=for-the-badge&logo=ubuntu&logoColor=white)](https://ubuntu.com/)

-This project sets up a local Kubernetes cluster using Vagrant and VirtualBox. It creates two Ubuntu 22.04 virtual machines: one master node and one worker node with automatic installation of Docker, Kubernetes components, and necessary configurations.
+This project sets up a local Kubernetes cluster using Vagrant and VirtualBox. It creates two Ubuntu 22.04 virtual machines: one control plane node and one worker node, with automatic installation of Docker, Kubernetes components, and necessary configurations.

## Architecture

```mermaid
graph TB
subgraph Vagrant-Managed Environment
-subgraph Master Node
+subgraph Control Plane Node
A[Control Plane] --> B[API Server]
B --> C[etcd]
B --> D[Controller Manager]
@@ -25,7 +25,7 @@ graph TB
B <-.-> F
B <-.-> H
end
-style Master Node fill:#f9f,stroke:#333,stroke-width:2px
+style Control Plane Node fill:#f9f,stroke:#333,stroke-width:2px
style Worker Node fill:#bbf,stroke:#333,stroke-width:2px
```

@@ -71,19 +71,19 @@ graph TB

## 🖥 Cluster Configuration

-> **Note about IP Addressing**: This configuration uses `192.168.63.1` and `192.168.63.2` for the master and worker nodes respectively. You can modify these IPs in the `Vagrantfile` to use any IP addresses from your router's IP range that are outside the DHCP scope. Make sure to choose IPs that won't conflict with other devices on your network.
+> **Note about IP Addressing**: This configuration uses `192.168.63.1` and `192.168.63.2` for the control plane and worker nodes respectively. You can modify these IPs in the `Vagrantfile` to use any IP addresses from your router's IP range that are outside the DHCP scope. Make sure to choose IPs that won't conflict with other devices on your network.

<table>
<tr>
-<th width="50%">Master Node</th>
+<th width="50%">Control Plane Node</th>
<th width="50%">Worker Node</th>
</tr>
<tr>
<td>

```yaml
IP: 192.168.63.1
-Hostname: master
+Hostname: cplane
Memory: 2048MB
CPUs: 2
Role: Control Plane
@@ -123,9 +123,9 @@ cd vagrant
vagrant up
```

-3. SSH into the master node:
+3. SSH into the control plane node:
```bash
-vagrant ssh master
+vagrant ssh cplane
```

4. SSH into the worker node:
@@ -163,11 +163,11 @@ vagrant destroy

After the VMs are up and running, follow these steps to initialize your Kubernetes cluster:

-### 1. On Master Node
+### 1. On Control Plane Node

-First, log into the master node:
+First, log into the control plane node:
```bash
-vagrant ssh master
+vagrant ssh cplane
```

Pull required Kubernetes images:
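The pull command itself sits just outside this hunk's context, so it does not appear in the diff; the standard kubeadm invocation for this step would be:

```bash
# Standard kubeadm command for pre-pulling the control plane images
# (assumed here; the README line itself is outside this hunk's context).
sudo kubeadm config images pull
```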
@@ -187,13 +187,38 @@
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
```

+### NOTE: The control plane script `cluster_init.sh` wraps steps 1 and 2
+
+For ease of use, a single script, `cluster_init.sh`, is generated on the control plane node(s) during `vagrant up` provisioning; it performs all of the above steps:
+* k8s image pull
+* kubeadm init
+* local copy of the kube config
+* Weave CNI install
+
+First, log into the control plane node:
+```bash
+vagrant ssh cplane
+```
+
+Run the cluster init script:
+```bash
+./cluster_init.sh
+```
+
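The body of `cluster_init.sh` is not shown in this diff. A minimal sketch consistent with the four steps it is said to wrap might look like the following; the `--apiserver-advertise-address` value and the exact kubeconfig handling are assumptions based on the cluster configuration above:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of cluster_init.sh -- the real script body is not
# part of this diff. Steps mirror the list above.
set -euo pipefail

# 1. Pull the required Kubernetes images
sudo kubeadm config images pull

# 2. Initialize the control plane (IP assumed from the Vagrantfile)
sudo kubeadm init --apiserver-advertise-address=192.168.63.1

# 3. Make a local copy of the kube config for the current user
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# 4. Install the Weave CNI (same manifest the README applies manually)
kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
```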
### 3. Join Worker Node

-Copy the `kubeadm join` command from the master node's initialization output and run it on the worker node with sudo privileges.
+Copy the `kubeadm join` command from the control plane node's initialization output and run it on the worker node with sudo privileges.
+
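The exact join command is generated at init time, but its shape is well known; with the control plane IP used in this README it would look roughly like:

```bash
# Placeholder values -- copy the real token and hash from the
# kubeadm init output (or from join_cmd.sh, described below).
sudo kubeadm join 192.168.63.1:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```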
+### NOTE: The control plane script `join_cmd.sh` prints the join command
+
+For ease of use, the script `join_cmd.sh` displays the join command to run on worker nodes. Invoke it from the host with:
+```bash
+vagrant ssh cplane -c "./join_cmd.sh"
+```
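As with `cluster_init.sh`, the body of `join_cmd.sh` is not part of this diff. A one-line implementation that fits its described behavior would be kubeadm's built-in regeneration of the join command:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of join_cmd.sh -- the real script body is not
# part of this diff. kubeadm can (re)print a valid join command:
kubeadm token create --print-join-command
```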

### 4. Verify Cluster Status

-After joining the worker node, verify the cluster status from the master node:
+After joining the worker node, verify the cluster status from the control plane node:

```bash
# Check node status
@@ -203,7 +228,7 @@ kubectl get nodes
Expected output (it may take a few minutes for the nodes to be ready):
```
NAME     STATUS   ROLES           AGE     VERSION
-master   Ready    control-plane   5m32s   v1.30.x
+cplane   Ready    control-plane   5m32s   v1.30.x
worker   Ready    <none>          2m14s   v1.30.x
```

@@ -228,7 +253,7 @@ sudo systemctl restart containerd
sudo systemctl daemon-reload
```

-3. After cleanup, retry the cluster initialization on master and join command on worker.
+3. After cleanup, retry the cluster initialization on the cplane node and the join command on the worker node.
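Assuming the helper scripts introduced in this commit, the retry can be driven entirely from the host; the `worker` machine name is an assumption based on the expected `kubectl get nodes` output:

```bash
# Hypothetical retry sequence using the helper scripts from this commit
vagrant ssh cplane -c "./cluster_init.sh"
vagrant ssh cplane -c "./join_cmd.sh"
# copy the printed join command, then run it with sudo on the worker:
vagrant ssh worker -c "sudo <paste kubeadm join command>"
```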

## Default Credentials
