# infrastructure.yaml
## Apply this file through our kubectl wrapper script, which substitutes environment variables with `envsubst` before applying: `./kubectl apply -f ./infrastructure.yaml`
# - It is recommended practice to put resources related to the same microservice or application tier into the same file, and to group all of the files associated with your application in the same directory.
# - Kubernetes itself only supports referencing other variables via `$(VAR_NAME)` inside the `value` field of a container's `env` section. Since we want to use environment variables in every attribute of a resource file, we use the `envsubst` approach instead: define the variables either in the shell session (they are lost when the shell closes) or in a file such as `.env`, then substitute with `envsubst < input.tmpl > output.txt` (the result can be written to a new file), or pipe the output into another command such as `less` or `kubectl`, e.g. `envsubst < deploy.yml | kubectl apply -f -`.
#   Note: stock GNU `envsubst` only rewrites `$VAR` and `${VAR}` references and passes the `${VAR:-default}` default-value syntax used below through unchanged, so defaults must be resolved before substitution (our `./kubectl` wrapper is assumed to take care of this).
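# A minimal sketch of this flow (assuming GNU gettext's `envsubst` is on the PATH; the variable
# names are the ones used in this file):
#   export ENVIRONMENT=staging
#   envsubst < infrastructure.yaml | kubectl apply -f -
# For example, `printf 'ns: ${ENVIRONMENT}\n' | envsubst` prints `ns: staging` once ENVIRONMENT
# is exported, while `printf 'p: ${UNSET_XYZ:-5672}\n' | envsubst` passes the default-value form
# through untouched.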
# - We can skip the company.com `prefix` if we don't intend to `distribute` our resources outside of our company (as long as we don't expect a `naming conflict` with another third-party package installed in our environment that uses the `same label` without a `prefix`).
# - We can visualize and manage Kubernetes objects with more tools than kubectl and the dashboard. A common set of labels allows tools to work interoperably, describing objects in a common manner that all tools can understand.
# - Shared labels and annotations share a common prefix: `app.kubernetes.io`. Labels `without` a prefix are `private` to users. The shared prefix ensures that shared labels do not `interfere` with `custom user labels`.
# - find resources with label selector `kubectl get pods -l environment=production,tier=frontend`
# https://kubernetes.io/docs/tasks/inject-data-application/define-interdependent-environment-variables/
# https://skofgar.ch/dev/2020/08/how-to-quickly-replace-environment-variables-in-a-file/
# https://blog.8bitbuddhism.com/2022/11/12/how-to-use-environment-variables-in-a-kubernetes-manifest/
# https://blog.kubecost.com/blog/kubernetes-labels/
# https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
# https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
# https://kubernetes.io/docs/reference/labels-annotations-taints/
# https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/#using-labels-effectively
# https://kubernetes.io/docs/concepts/cluster-administration/manage-deployment/
# https://www.datree.io/resources/kubernetes-error-codes-field-is-immutable
# https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
# https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
#######################################################
# RabbitMQ
#######################################################
# https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
# https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
apiVersion: apps/v1
kind: Deployment
metadata:
name: rabbitmq-deployment-${ENVIRONMENT:-dev}
namespace: food-delivery
labels:
app.kubernetes.io/name: rabbitmq
app.kubernetes.io/component: message-queue
app.kubernetes.io/instance: rabbitmq-food-delivery
app.kubernetes.io/part-of: food-delivery
app.kubernetes.io/managed-by: kubernetes
food-delivery.mehdi.io/tier: backend
food-delivery.mehdi.io/environment: ${ENVIRONMENT:-dev}
spec:
  # creates a ReplicaSet that brings up 1 replicated Pod
replicas: 1
# https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment
  # The `.spec.selector` field in a Deployment defines how the created ReplicaSet finds which Pods to `manage`. In this case we select on the `matchLabels` labels, finding the Pods that carry them (the Pod labels are defined in the Pod `template` section below).
selector:
# https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements
    # `matchLabels` is a map of {key,value} pairs (equality-based requirements). A single {key,value} in the matchLabels map is equivalent to an element of `matchExpressions` whose key field is "key", operator is "In", and the values array contains only "value".
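    # For illustration, the first matchLabels entry below could equivalently be written in the
    # matchExpressions form:
    #   matchExpressions:
    #     - key: app.kubernetes.io/name
    #       operator: In
    #       values: ["rabbitmq"]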
matchLabels:
app.kubernetes.io/name: rabbitmq
app.kubernetes.io/component: message-queue
food-delivery.mehdi.io/tier: backend
food-delivery.mehdi.io/environment: ${ENVIRONMENT:-dev}
  # `template` defines the Pod template
# https://kubernetes.io/docs/concepts/workloads/pods/
template:
# Pod metadata and its labels
metadata:
# Pod labels
labels:
app.kubernetes.io/name: rabbitmq
app.kubernetes.io/instance: rabbitmq-food-delivery
app.kubernetes.io/component: message-queue
app.kubernetes.io/part-of: food-delivery
app.kubernetes.io/managed-by: kubernetes
food-delivery.mehdi.io/tier: backend
food-delivery.mehdi.io/environment: ${ENVIRONMENT:-dev}
# Pod template's specification
spec:
containers:
- name: rabbitmq
image: rabbitmq:management
# https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
imagePullPolicy: IfNotPresent
ports:
- containerPort: ${RABBITMQ_PORT:-5672}
name: rabbitmq
- containerPort: ${RABBITMQ_API_PORT:-15672}
name: rabbitmq-api
# https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes
          # the Service doesn't route traffic to the Pod until its `readinessProbe` succeeds and the Pod reaches the `Ready` state
readinessProbe:
tcpSocket:
port: ${RABBITMQ_PORT:-5672}
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 1
# https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command
livenessProbe:
tcpSocket:
port: ${RABBITMQ_PORT:-5672}
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 1
#https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy
#https://github.com/kubernetes/kubernetes/issues/24725
restartPolicy: Always
---
# https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
# `kubectl port-forward` accepts a resource name, such as a `pod name` or `service name`, and forwards one of its ports to a `local port`, reachable at `http://127.0.0.1:<Forward_Port>`, `http://localhost:<Forward_Port>`, or the IPv6 loopback `[::1]:<Forward_Port>`
# To port-forward the rabbitmq UI service (a ClusterIP service), we can use `kubectl port-forward service/rabbitmq-service-dev 15672:15672 -n food-delivery` and allow port `15672` through the firewall; this proxies and forwards traffic to a local port (http://127.0.0.1:15672, http://localhost:15672, or http://[::1]:15672)
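# While the port-forward is running, the management API can be sanity-checked from another shell, e.g.:
#   curl -fsS -u guest:guest http://localhost:15672/api/overview
# (guest/guest is the default user of the `rabbitmq:management` image; by default it may only log in
# from localhost, which a port-forwarded connection satisfies)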
# we can also create an ingress route for this --> https://blog.knoldus.com/how-to-deploy-rabbit-mq-on-kubernetes/
# https://kubernetes.io/docs/concepts/services-networking/service/
apiVersion: v1
kind: Service
metadata:
name: rabbitmq-service-${ENVIRONMENT:-dev}
namespace: food-delivery
labels:
app.kubernetes.io/name: rabbitmq
app.kubernetes.io/instance: rabbitmq-food-delivery
app.kubernetes.io/component: message-queue
app.kubernetes.io/part-of: food-delivery
app.kubernetes.io/managed-by: kubernetes
food-delivery.mehdi.io/tier: backend
food-delivery.mehdi.io/environment: ${ENVIRONMENT:-dev}
spec:
type: ClusterIP
# The set of Pods targeted by a Service is usually determined by a selector
# https://kubernetes.io/docs/concepts/services-networking/service/#services-in-kubernetes
selector:
app.kubernetes.io/name: rabbitmq
food-delivery.mehdi.io/tier: backend
app.kubernetes.io/component: message-queue
food-delivery.mehdi.io/environment: ${ENVIRONMENT:-dev}
# https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
ports:
- name: rabbitmq
protocol: TCP
port: ${RABBITMQ_HOST_PORT:-5672}
# Port definitions in Pods have names, and you can reference these names in the targetPort attribute of a Service
targetPort: rabbitmq
- name: rabbitmq-api
protocol: TCP
# service port
port: ${RABBITMQ_HOST_API_PORT:-15672}
# pod container port - Port definitions in Pods have names, and you can reference these names in the targetPort attribute of a Service
targetPort: rabbitmq-api
---
# https://kubernetes.io/docs/concepts/services-networking/ingress/
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: rabbitmq-ingress-${ENVIRONMENT:-dev}
namespace: food-delivery
labels:
app.kubernetes.io/name: rabbitmq
app.kubernetes.io/instance: rabbitmq-food-delivery
app.kubernetes.io/component: message-queue
app.kubernetes.io/part-of: food-delivery
app.kubernetes.io/managed-by: kubernetes
food-delivery.mehdi.io/tier: backend
food-delivery.mehdi.io/environment: ${ENVIRONMENT:-dev}
annotations:
traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
rules:
- host: local-services # Replace with your domain name
http:
paths:
# rabbit-ui doesn't work with sub path --> https://github.com/docker-library/rabbitmq/issues/249
- path: /
pathType: Prefix
backend:
service:
name: rabbitmq-service-${ENVIRONMENT:-dev}
port:
name: rabbitmq-api
#######################################################
# MongoDB
#######################################################
# https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/
# https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongo-deployment-${ENVIRONMENT:-dev}
namespace: food-delivery
labels:
app.kubernetes.io/name: mongo
app.kubernetes.io/component: database
app.kubernetes.io/instance: mongo-food-delivery
app.kubernetes.io/part-of: food-delivery
app.kubernetes.io/managed-by: kubernetes
food-delivery.mehdi.io/tier: backend
food-delivery.mehdi.io/environment: ${ENVIRONMENT:-dev}
spec:
  # creates a ReplicaSet that brings up 1 replicated Pod
replicas: 1
# https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment
  # The `.spec.selector` field in a Deployment defines how the created ReplicaSet finds which Pods to `manage`. In this case we select on the `matchLabels` labels, finding the Pods that carry them (the Pod labels are defined in the Pod `template` section below).
selector:
# https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#resources-that-support-set-based-requirements
    # `matchLabels` is a map of {key,value} pairs (equality-based requirements). A single {key,value} in the matchLabels map is equivalent to an element of `matchExpressions` whose key field is "key", operator is "In", and the values array contains only "value".
matchLabels:
app.kubernetes.io/name: mongo
app.kubernetes.io/component: database
food-delivery.mehdi.io/tier: backend
food-delivery.mehdi.io/environment: ${ENVIRONMENT:-dev}
  # `template` defines the Pod template
# https://kubernetes.io/docs/concepts/workloads/pods/
template:
# Pod metadata and its labels
metadata:
# Pod labels
labels:
app.kubernetes.io/name: mongo
app.kubernetes.io/instance: mongo-food-delivery
app.kubernetes.io/component: database
app.kubernetes.io/part-of: food-delivery
app.kubernetes.io/managed-by: kubernetes
food-delivery.mehdi.io/tier: backend
food-delivery.mehdi.io/environment: ${ENVIRONMENT:-dev}
# Pod template's specification
spec:
containers:
- name: mongo
image: mongo
env:
- name: MONGO_INITDB_ROOT_USERNAME
value: ${MONGO_USER:-admin}
- name: MONGO_INITDB_ROOT_PASSWORD
value: ${MONGO_PASS:-admin}
ports:
- containerPort: ${MONGO_PORT:-27017}
name: mongo
# https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
imagePullPolicy: IfNotPresent
# https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes
          # the Service doesn't route traffic to the Pod until its `readinessProbe` succeeds and the Pod reaches the `Ready` state
readinessProbe:
tcpSocket:
# container port name - https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#use-a-named-port
port: mongo
initialDelaySeconds: 15
periodSeconds: 10
# https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command
livenessProbe:
exec:
command:
                # the legacy `mongo` shell was removed from the image as of MongoDB 6.0, so probing with it would always fail; `mongosh` ships in its place
                - mongosh
                - --eval
                - "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 20
#https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy
#https://github.com/kubernetes/kubernetes/issues/24725
restartPolicy: Always
---
# https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
# `kubectl port-forward` accepts a resource name, such as a `pod name` or `service name`, and forwards one of its ports to a `local port`, reachable at `http://127.0.0.1:<Forward_Port>`, `http://localhost:<Forward_Port>`, or the IPv6 loopback `[::1]:<Forward_Port>`
# kubectl port-forward service/mongo-service-dev 27017:27017 -n food-delivery
# we can also create an ingress route for this
# https://kubernetes.io/docs/concepts/services-networking/service/
apiVersion: v1
kind: Service
metadata:
name: mongo-service-${ENVIRONMENT:-dev}
namespace: food-delivery
labels:
app.kubernetes.io/name: mongo
app.kubernetes.io/instance: mongo-food-delivery
app.kubernetes.io/component: database
app.kubernetes.io/part-of: food-delivery
app.kubernetes.io/managed-by: kubernetes
food-delivery.mehdi.io/tier: backend
food-delivery.mehdi.io/environment: ${ENVIRONMENT:-dev}
spec:
type: ClusterIP
# The set of Pods targeted by a Service is usually determined by a selector
# https://kubernetes.io/docs/concepts/services-networking/service/#services-in-kubernetes
selector:
app.kubernetes.io/name: mongo
food-delivery.mehdi.io/tier: backend
app.kubernetes.io/component: database
food-delivery.mehdi.io/environment: ${ENVIRONMENT:-dev}
# https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services
ports:
- name: mongo
protocol: TCP
port: ${MONGO_HOST_PORT:-27017}
# Port definitions in Pods have names, and you can reference these names in the targetPort attribute of a Service
targetPort: mongo
---
# https://doc.traefik.io/traefik/routing/providers/kubernetes-crd/#kind-ingressroutetcp
# https://wirywolf.com/2022/07/postgresql-ingress-using-traefik-kubernetes-k3s.html
# https://community.traefik.io/t/adding-entrypoints-to-a-helm-deployed-traefik-on-k3s/14813
# https://community.traefik.io/t/tcp-on-kubernetes/1528/18
# https://github.com/traefik/traefik/issues/7112
# If we want to access mongo through a traefik TCP route, we should add a `mongo` entrypoint in our patch values for the traefik helm chart; then we can reach the node IP on TCP port `27017` --> <Node_Ip>:27017
# valuesContent: |-
# ports:
# mongo:
# port: 27017
# expose: true
# protocol: TCP
# exposedPort: 27017
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
name: mongo-ingress-${ENVIRONMENT:-dev}
namespace: food-delivery
spec:
# https://doc.traefik.io/traefik/routing/entrypoints/
entryPoints:
- mongo
routes:
- match: HostSNI(`*`)
services:
- name: mongo-service-${ENVIRONMENT:-dev}
port: ${MONGO_HOST_PORT:-27017}
## using a domain name with HostSNI requires TLS
# - match: HostSNI(`local-mongo`)
# services:
# - name: mongo-service-${ENVIRONMENT:-dev}
# port: ${MONGO_HOST_PORT:-27017}
# tls:
# passthrough: true
---