# Cilium Network Communication Acceleration

## Introduction

This page describes how to configure Cilium's communication acceleration capability. There are two optional configuration methods, described below.

## Prerequisites

Make sure the Linux kernel version is >= 4.9.17, with 5.10+ recommended. To view the current version and install a newer one, do the following:

1. To view the current kernel version:
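
    For example, on most Linux distributions:

    ```bash
    # Print the running kernel release
    uname -r
    ```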

2. After installing a newer kernel (the exact steps depend on your distribution), regenerate the GRUB configuration so that the new kernel is used at the next boot:

    ```bash
    grub2-mkconfig -o /boot/grub2/grub.cfg
    ```

> Note that manually updating the kernel can be risky and should be done with caution in production environments.

## First method: configure when creating the cluster

1. Click `Container Management` --> `Clusters`. On the `Create Cluster` page, enter the cluster's basic information and node configuration, then go to `Network Configuration` and configure as follows:

    - Select `cilium` as the cluster's CNI plugin.

    - Add `other parameters` as follows:

      ```yaml
      # Must be set to "true"; otherwise cross-node traffic cannot be routed
      cilium_auto_direct_node_routes: "true"
      # eBPF-based masquerading replaces the iptables implementation.
      # Requires kernel 5.10 or later; otherwise it falls back to the
      # iptables implementation even if enabled
      cilium_enable_bpf_masquerade: "true"
      # Source address translation for Pod traffic leaving the cluster:
      # enable it when using tunnel mode; disable it when BGP is used
      # to connect to the physical network
      cilium_enable_ipv6_masquerade: "false"
      # Set to "false" so packets bypass the host kernel stack, which speeds up
      # forwarding; Cilium falls back to legacy routing if the kernel lacks support
      cilium_enable_host_legacy_routing: "false"
      # Enable the bandwidth manager to improve TCP and UDP performance
      cilium_enable_bandwidth_manager: "true"
      # The kube-proxy replacement can be enabled once the kube-proxy component is removed
      cilium_kube_proxy_replacement: strict
      # Disable tunnel mode
      cilium_tunnel_mode: disabled
      # (Optional) BBR congestion control; requires kernel > 5.18
      cilium_enable_bbr: "true"
      ```

    - Use the default configuration for everything else.

2. Click `Create Cluster` to complete the creation.
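
    After the cluster is up, you can optionally verify that the features took effect. A sketch using the Cilium CLI inside an agent pod (this assumes Cilium runs as the `cilium` DaemonSet in `kube-system`, with the agent container named `cilium-agent`):

    ```bash
    # Print the feature status reported by one Cilium agent
    kubectl -n kube-system exec ds/cilium -c cilium-agent -- cilium status
    ```

    The output reports, among other things, the kube-proxy replacement, masquerading, and bandwidth manager status.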

## Second method: modify the Cilium ConfigMap

If the cluster has already been created and you need to enable the acceleration parameters, you can modify the `cilium-config` ConfigMap.

Click `Container Management` --> `Clusters`. Go to the created cluster and click `ConfigMaps & Secrets`. Select `Config Items`, find `cilium-config`, and then click Edit to enter the acceleration parameters listed in the first method.
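
If you prefer the command line, the same change can be made with kubectl (a sketch, assuming Cilium is installed in the `kube-system` namespace):

```bash
# Inspect the current Cilium configuration
kubectl -n kube-system get configmap cilium-config -o yaml

# Open the ConfigMap in an editor and add the acceleration parameters
kubectl -n kube-system edit configmap cilium-config
```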

Once the parameters are written, check whether the following environment variables exist in the cilium-agent DaemonSet (in the `cilium-agent` containers). If they are missing, add them manually; with kube-proxy removed, the agent cannot reach the API server through the in-cluster Service address, so the address must be set explicitly:

```yaml
- name: KUBERNETES_SERVICE_HOST
  value: <YOUR_K8S_APISERVER_IP>
- name: KUBERNETES_SERVICE_PORT
  value: "6443"
```

Restart the Cilium agent pods after adding them so that the changes take effect.
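
A typical way to restart the agents (a sketch, assuming the DaemonSet is named `cilium` in `kube-system`):

```bash
# Trigger a rolling restart of the Cilium agent DaemonSet
kubectl -n kube-system rollout restart daemonset/cilium

# Wait until all agent pods are ready again
kubectl -n kube-system rollout status daemonset/cilium
```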