
Kubernetes Workshop

This repository consists of notes and links regarding Kubernetes, version 1.18 and later. To create a real workshop from the material presented here, I recommend creating a customized learning path and a presentation. The main sources of this repository are the great Udemy courses Kubernetes for the Absolute Beginners by Mumshad Mannambeth and Docker Mastery: with Kubernetes +Swarm from a Docker Captain by Bret Fisher.

Introduction

  • https://kubernetes.io/

  • aka "K8" or "K8s"

  • = Container orchestration = runs on top of Docker and adds a set of APIs to manage containers across servers

  • developed by Google, released in 2015

  • Kubernetes supports other container runtimes than Docker, such as rkt ("rocket") or CRI-O

  • a lot of distributions like Rancher, Red Hat OpenShift, VMWare Tanzu, Amazon Elastic Kubernetes Service (EKS) - all based on the original GitHub version of Kubernetes

Kubernetes Components

  • official documentation

  • Node

    • = machine (physical or virtual)

    • has the kubelet agent on it so it can interact with the master (delivering health information to the master, receiving commands from it), and also runs kube-proxy

  • Cluster

    • = set of nodes

  • master = control plane

    • = one or more nodes that manage the other nodes in the cluster

    • has the kube-apiserver on it, so it can be interacted with via the command line

    • has etcd key-value-store on it to store all data used to manage the cluster

    • has controller-manager and scheduler (for distributing work or containers across multiple nodes) on it to work with worker nodes

  • kubectl = "kube control" = CLI

    • used to deploy and manage applications on a Kubernetes cluster

Install kubectl

Options for Using Kubernetes

To be able to use kubectl, a connection to a cluster has to be established.

Remotely via browser

"Play with Kubernetes"

Katacoda

KodeKloud

Local Installation

  • Kind

  • Minikube

    • "Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a Virtual Machine (VM) on your laptop for users looking to try out Kubernetes or develop with it day-to-day."

  • microk8s

    • "A single package of k8s for 42 flavours of Linux. Made for developers, and great for edge, IoT and appliances."

Hosting at Cloud Providers

  • install K8s yourself at Google Cloud Platform, AWS or Azure, or use managed services such as EKS

Connect kubectl to cluster

  • after setting up a local cluster with one of the options above or creating a remote cluster, kubectl has to be connected to this cluster, see Accessing Clusters in the Kubernetes documentation
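
  • once the connection is configured, it can be verified with standard commands, for example:

kubectl config current-context
kubectl cluster-info
kubectl get nodes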

yaml Template Files

  • Resources in Kubernetes are created with yaml templates, consisting of the following 4 elements

apiVersion

  • list all api-versions with

kubectl api-versions

kind

  • list resources with

kubectl api-resources
  • use values in column "KIND" in yaml-files

metadata

  • only name is required

spec

  • show the full spec tree of a resource (here: services) with

kubectl explain services --recursive
  • show specs for kind service with

kubectl explain services.spec
  • this also allows digging deeper with

kubectl explain deployment.spec.template.spec

Pods

  • applications don’t get installed on nodes directly, instead get wrapped in pods

  • pod = single instance of an application; smallest creatable object in K8

  • scaling = creating new pods on either existing or new nodes

  • (multiple different) containers can live inside a pod

  • but: one specific application can not have multiple instances in a pod!

  • for example: one pod can hold several different applications, but not two of the same kind

  • containers inside a pod can talk to each other via localhost and share same storage

Defining Pod via CLI

  • disclaimer: defining resources via CLI is discouraged - resources should be created via yaml templates

  • a simple pod named mynginx which downloads the nginx image and runs it can be created with:

kubectl run mynginx --image nginx

HOWEVER, a single pod should not be created by itself manually. Instead, a deployment should be created with:

kubectl create deployment mynginx --image nginx
  • list of pods:

kubectl get pods
  • list of nodes:

kubectl get nodes
  • get more information about pods:

kubectl describe pod mypodname
  • get table with pods with IP and which node they run in:

kubectl get pods -o wide
  • get all resources:

kubectl get all

The last command demonstrates that creating a deployment has created several objects:

  • A pod (with the actual container running in it) which is wrapped by …

  • a replica set and …​

  • a deployment that manages replica sets.

All the formerly created objects can be deleted with

kubectl delete deployment mynginx

Defining Pod via yaml

  • A Kubernetes definition file always includes four required fields:

    • apiVersion

    • kind

    • metadata

    • spec

  • example definition file:

pod-definition.yml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
    type: front-end
spec:
  containers:
    - name: nginx-container
      image: nginx

    - name: backend-container
      image: redis
kubectl apply -f pod-definition.yml
  • apiVersion = version of the Kubernetes API used to create the object. Some kinds with their versions:

    • Pod ⇒ v1

    • Service ⇒ v1

    • ReplicaSet ⇒ apps/v1

    • Deployment ⇒ apps/v1

  • important:

    • under metadata, only certain keys are allowed

    • under labels, custom key-value pairs are also allowed

  • spec = "what is inside the pod"; different depending on what kind is created (if kind = "Pod", then spec includes containers)

Inspecting Deployment Objects

  • (as seen above), list instances of objects with

kubectl get pods
kubectl get nodes
kubectl get all
  • get has a watch-mode which means it will add a new line when new information becomes available:

kubectl get pods -w
  • get information about a specific pod:

kubectl describe pod myapp-pod
  • see logs of a specific pod:

kubectl logs deployment/mynginx
kubectl logs deployment/mynginx --follow
kubectl logs deployment/mynginx --tail 3
  • seeing logs of multiple pods requires a common label on all these pods, for example the name of the deployment:

kubectl logs -l run=my-deployment

Replica Set

  • "replication controller" != "replica set"! "Replication controller" deprecated, replaced by replica set.

  • main task of replica set: "specified number of pods should be running!"

  • Replica set can be created directly and scaled like shown below. However, it’s supposed to be managed by a deployment instead, via a yaml-file

  • creating replica set directly (not recommended!):

replicaset-definition.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
        - name: nginx-container
          image: nginx
  replicas: 1
  selector:
    matchLabels:
      type: front-end
  • the selector under spec is necessary because replica sets can also manage pods that are not part of the original creation of the replica set (because they already existed, for example)

  • create with:

kubectl create -f replicaset-definition.yml
  • get replica sets:

kubectl get replicaset
  • replica sets monitor those pods whose label definitions match the matchLabels in the selector ⇒ multiple replica sets can monitor a huge number of pods

  • background of the template section in the replica-set definition file: it is a duplicate of the pod definition. It is still needed because the replica set is supposed to create new pods later on, even when a sufficient number of pods exists at startup of the replica set

  • updating replica-set to run more than the specified number of replicas:

    • update definition file

    • then run:

kubectl replace -f replicaset-definition.yml
  • alternative way:

kubectl scale --replicas=6 -f replicaset-definition.yml
  • or, by providing type and name of replica set instead of definition file:

kubectl scale --replicas=6 replicaset myapp-replicaset
  • to test whether the replica set really brings back crashed pods, delete one pod - it should be back soon:

kubectl delete pod mycreatedpod
  • Attention: Pods created with the same label as pods in a replica set will be deleted automatically because this label is managed by replica-set!

  • Note: Creating and scaling replica sets manually is not the preferred way of managing a cluster! The way to go are deployments, via yaml-files (see below).

Deployments

  • aspects of deploying in cloud production environment:

    • many instances of app running

    • rolling updates: upgrading instances not all at once but one after another so access to the app is granted at all times

    • rollback changes in case of errors

    • apply set of changes to environment as a set, not as single changes

    • Conceptually, a "deployment" in Kubernetes contains a "replica set", which contains "pods".

  • the definition is almost identical to the definition of a replica set, except for the kind:

deployment-definition.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end
    spec:
      containers:
        - name: nginx-container
          image: nginx
  replicas: 1
  selector:
    matchLabels:
      type: front-end
kubectl create -f deployment-definition.yml
  • get deployments:

kubectl get deployments

Updates and Rollback

  • if deployment is executed because of version change, rollout is triggered which creates a new deployment revision

  • view state of rollout:

kubectl rollout status deployment/myapp-deployment
  • view history of rollouts:

kubectl rollout history deployment/myapp-deployment
  • history list per default not very verbose, see https://blenderfox.com/2018/06/23/using-the-change-cause-kubernetes-annotation-as-a-changelog/

  • 2 types of deployment strategies:

    • recreate: first destroy all instances, only then create new instances → downtime!

    • rolling update: takes down the older version and brings up the new one, one by one (default)

  • performing updates:

    1. adapt deployment-definition-file

    2. kubectl apply -f deployment-definition.yml --record

    3. kubectl rollout status deployment/myapp-deployment

  • the --record flag will fill the CHANGE-CAUSE column when running kubectl rollout history

  • rolling update is done by creating new replica set first, then taking down pods from the old replica set and creating them in the new replica set

  • rollback to previous revision by:

kubectl rollout undo deployment/myapp-deployment

Deleting Resources

  • delete everything in folder with yaml-files:

$ kubectl delete -f .
  • can be reversed with

$ kubectl apply -f .

Network

  • nodes have IP addresses because they are (physical or virtual) machines

  • also, nodes are given a range of IP addresses to assign to the pods running on them

  • IP addresses for container concepts:

    • in Docker, each container gets an IP address

    • in Kubernetes, each pod gets an IP address

  • all pods on a node are in a virtual network and can reach each other through this network

  • however, clusters consisting of multiple nodes run into problems because Kubernetes itself doesn’t set up routing between nodes

  • this is solved by external network plugins like Cisco, Flannel or Cilium

Services

  • in Kubernetes, nodes and thereby pods are ephemeral and can be assigned new IPs all the time, hence reaching them directly from outside is impossible

  • services = way of making things inside the cluster available from outside; provide stable address for pods

  • types of services:

    • ClusterIP

      • default

      • single, internal virtual IP

      • only reachable from within cluster (from other nodes and pods)

    • NodePort

      • for communication from outside the cluster to the nodes in the cluster, using the actual IPs of the nodes

    • LoadBalancer

      • for traffic coming in from the outside

      • often through cloud provider like AWS ELB

    • ExternalName (see the example after this list)

      • for when objects in the cluster need to talk to the outside world

      • adds CNAME DNS record to CoreDNS

  • Great explanation of Kubernetes on YouTube with nice visualizations
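
  • minimal sketch of an ExternalName service (the service name external-db and the hostname db.example.com are placeholders):

externalname-service-definition.yml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com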

ClusterIP

  • provides a single, internal IP with a port at which the service itself is accessible

  • No access to the service from outside! For that, an additional ingress is needed!

  • ingress targets service for specific requests and forwards them to this service

  • targeting of service by ingress done by name of the service

  • ClusterIP-service may also be targeted by pods running in the cluster, for example a backend trying to reach the database

Create ClusterIP Service via yml

clusterip-service-definition.yml
apiVersion: v1
kind: Service
metadata:
  name: back-end
spec:
  selector:
    app: myapp
    type: back-end
  ports:
    - port: 80
      targetPort: 80
  type: ClusterIP
  • requests landing at service are forwarded to one of the pods that have all the labels referenced in selector

  • pods that get traffic from a service = the service’s "endpoints"

  • selector = key-value-pairs, free to choose

  • port = port the service listens on for requests to forward (multiple ports can be opened by adding more entries to the ports list)

  • targetPort = port of the pod that the request will be sent to by the service

Multi-Port Service
  • a service exposing more than one port has to name the entries in the ports list:

apiVersion: v1
kind: Service
metadata:
  name: back-end
spec:
  selector:
    app: myapp
    type: back-end
  ports:
    - name: web
      port: 80
      targetPort: 80
    - name: mongodb
      port: 27017
      targetPort: 27017
  type: ClusterIP

Create ClusterIP Service via CLI

  • creating a deployment with some pods first:

kubectl create deployment httpenv --image=bretfisher/httpenv
kubectl scale deployment/httpenv --replicas=5
kubectl expose deployment/httpenv --port 8888
  • default type for kubectl expose is ClusterIP; in the examples below, a specific type is given as a parameter to create other kinds of services

Reaching ClusterIP Service

  • remember: localhost:8888 can not be reached from the host; the exposed port is only available from inside the cluster! However, on Linux, it can be reached by:

curl [ip of service]:8888
  • IP of service can be seen with

kubectl get service

NodePort

  • NodePort service is accessible on a static port of each worker node in the cluster

  • comparison with ClusterIP service:

    • ClusterIP is only available within the cluster

    • NodePort opens a fixed port on each worker node to the outside

  • with NodePort possible: direct communication from browser to a specific worker node within the cluster on a given port

  • three ports involved, named from the viewpoint of the server:

    • port on pod where application is running = target port

    • port on service itself = "port"

    • port on the node = node port (used to access the node from outside) → valid range: 30000 - 32767

  • creating a NodePort service will automatically create a ClusterIP service for the port

  • because NodePort will open every worker node to the public, this is not a secure option

Create NodePort Service via yml

service-definition.yml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: NodePort
  ports:
    - targetPort: 80
      port: 8080
      nodePort: 30008
  selector:
    app: myapp
    type: front-end
  • with the above configuration, an external browser can call [node-ip]:30008 and is then forwarded to the automatically created ClusterIP service’s port 8080, which forwards to the pod’s port 80.

  • connection between service and pod via labels

  • creating service:

kubectl create -f service-definition.yml
  • viewing service:

kubectl get services
  • with above definition, running application accessible via IP of worker-node plus designated port (IP of node may differ from this example)

  • attention: unlike in Docker, the order of the ports is reversed: 8888:32334/TCP means "8888 inside the cluster, 32334 host" (host port is determined automatically)

curl 192.168.1.2:30008
  • often, multiple pods on multiple nodes running with same labels and same application

    • NodePort-service created as above will automatically balance load between all pods = built-in load balancer

Create NodePort Service via CLI

kubectl expose deployment/httpenv --port 8888 --name httpenv-np --type NodePort

LoadBalancer

  • normally, load balancer has to be provided by external infrastructure like AWS ELB

  • however, Docker Desktop provides an out-of-the-box load balancer for Kubernetes

  • publishes the --port on localhost

  • creating a LoadBalancer service will automatically create a NodePort and a ClusterIP service

  • if a load balancer is used, no ingress has to be created

Create LoadBalancer Service via yml

loadbalancer-service-definition.yml
apiVersion: v1
kind: Service
metadata:
  name: load-balancer-service
spec:
  selector:
    app: myapp
    type: front-end
  ports:
    - port: 80
      targetPort: 80
  type: LoadBalancer
kubectl get services

Create LoadBalancer Service via CLI

kubectl expose deployment/httpenv --port 8888 --name httpenv-lb --type LoadBalancer

curl localhost:8888

Best Practice for Services

The preferred way to expose a service externally is using a ClusterIP service plus ingress.

Ingress

  • manages external access to the services in a cluster

  • requires an ingress controller like NGINX or Traefik installed on the Kubernetes cluster

  • each ingress must refer to a service
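
  • minimal sketch of an ingress that routes a host and path to a service (host, ingress name and service name are placeholders; a complete example is shown in the complex example further below):

ingress-definition.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-service
                port:
                  number: 80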

NetworkPolicy

  • = virtual firewall rules that control how groups of pods communicate with each other and with other network endpoints

Generators

  • many commands don’t need every argument

  • missing arguments resolved using templates called generators

  • every resource in Kubernetes has a specification that can be output with --dry-run=client -o yaml:

kubectl create deployment sample --image nginx --dry-run=client -o yaml
  • above is a client-side dry-run which ignores resources already created server-side

  • server-side dry-run, acknowledging all existing resources:

kubectl apply -f app.yml --dry-run=server
  • see diff visually with

kubectl diff -f app.yml

Complex Example: Minimal stateless application

  • (an old version of this course included this github repo, which is a fork of the repo used in one of the Udemy courses)

The following example is the minimal set of resources needed for a simple, stateless application.

The following files can be found in the folder of the deployment for complex-example-minimal-stateless-application:

├── app
│   ├── Dockerfile
│   ├── index.html
│   └── readme.adoc
└── kubernetes
    ├── deployment-definition.yml
    ├── ingress.yml
    ├── network-policy.yml
    └── service.yml

A httpd as a Simple Example Application

The "application" that should be deployed lives in app and consists only of an httpd server that serves a modified index.html, as can be seen here:

app/Dockerfile
FROM httpd

COPY index.html /usr/local/apache2/htdocs/
app/index.html
Hello from Steven!

This Docker container can be built with

docker build -t docker.myprivatedockerrepo.eu/cxp/heiter-bis-wolkig-stevens-hello-world .

To run it:

docker container run -p 80:80 --name cxp-hello-world docker.myprivatedockerrepo.eu/cxp/heiter-bis-wolkig-stevens-hello-world

To use it later in the Kubernetes cluster, it should be pushed to a private Docker repository:

docker login

docker push docker.myprivatedockerrepo.eu/cxp/heiter-bis-wolkig-stevens-hello-world

Locally, it can be run with

docker run --rm -p 80:80 docker.myprivatedockerrepo.eu/cxp/heiter-bis-wolkig-stevens-hello-world

Kubernetes Deployment

The 4 files discussed in this section all live in /kubernetes.

The deployment will manage the pods and the replica set. The service will expose the application within the cluster. The ingress will expose the application outside the cluster. The network policy will allow inbound traffic to the application’s pods.

The deployment-definition.yml deploys the application above as two replicas (pods) with port 80 exposed:

deployment-definition.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ssc-cxp-demo-deployment
  labels:
    app: stevens-first-kubernetes-app
  namespace: cxp-team-heiterbiswolkig
spec:
  template:
    metadata:
      name: stevens-first-pod
      labels:
        app: stevens-first-kubernetes-app
    spec:
      containers:
        - name: stevens-first-app
          image: docker.myprivatedockerrepo.eu/cxp/heiter-bis-wolkig-stevens-hello-world:latest
          ports:
          - containerPort: 80
      imagePullSecrets:
        - name: regcred
  replicas: 2
  selector:
    matchLabels:
      app: stevens-first-kubernetes-app

The imagePullSecrets entry references a previously created secret in Kubernetes that allows pulling the custom image from a private image registry.

The service.yml creates a ClusterIP service (because that is the default when creating a service) that targets the pods with the label app: stevens-first-kubernetes-app and routes port 80 within the cluster to port 80 of those pods:

service.yml
apiVersion: v1
kind: Service
metadata:
  name: ssc-cxp-demo-service
  namespace: cxp-team-heiterbiswolkig
spec:
  selector:
    app: stevens-first-kubernetes-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

This network-policy.yml allows inbound traffic to all pods matching the given matchLabels:

network-policy.yml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: ssc-cxp-demo-network-policy
  namespace: cxp-team-heiterbiswolkig
spec:
  podSelector:
    matchLabels:
      app: stevens-first-kubernetes-app
  ingress:
  - {}

The ingress.yml is specified for a specific host and path(s) and routes to the formerly created service:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ssc-cxp-demo-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: insert.your.host.here
    http:
      paths:
      - path: /cxp-team-heiterbiswolkig/ssc-cxp-demo(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: ssc-cxp-demo-service
            port:
              number: 80

Labels and Annotations

  • in the metadata section of the yaml, resources can be labeled with key-value pairs

  • some labels are non-optional and link resources to each other via selectors such as matchLabels, for example services to pods with the same label

  • however, also custom labels possible

  • custom labels are important for identifying resources, for example tier: frontend, app: api, env: prod, customer: my-customer

  • labels are not meant to hold complex, large or non-identifying info - that is what annotations are for (see the example at the end of this section)

  • usage example filtering:

kubectl get pods -l app=nginx
  • usage example applying only matching labels:

kubectl apply -f myfile.yaml -l app=nginx
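
  • example metadata combining identifying labels with a non-identifying annotation (all names and values are made up for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: api-pod
  labels:
    app: api
    tier: frontend
    env: prod
  annotations:
    description: "serves the public REST API, owned by the CXP team"
spec:
  containers:
    - name: api
      image: nginx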

Storage

  • recommendation in general: use databases as managed services from cloud provider!

StatefulSet

  • if stateful containers have to run in Kubernetes, use StatefulSets

  • = resource for making pods more long-lived

  • manages deployment and scaling of a set of pods so that they are more predictable and can be used to persist data

PersistentVolumeClaim

  • = claim for storage on a persistent volume by a stateful set or deployment

  • persistent volume claims are not deleted when associated stateful set or deployment is uninstalled from cluster = data outlives nodes

PersistentVolume

  • = piece of storage that can be added as a resource to the cluster

  • have their own lifecycle, independent of the pods that use them

  • hide the implementation of the actual storage, which can be backed by e.g. AWS EBS or AWS EFS

  • PersistentVolumes are never handled directly, only via PersistentVolumeClaims
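
  • minimal sketch of a PersistentVolumeClaim (name, size and access mode are assumptions):

persistent-volume-claim.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  • a pod or stateful set then references the claim under volumes via persistentVolumeClaim.claimName and mounts it with volumeMounts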

Namespaces

  • different namespaces act as totally independent and non-connected clusters

  • limit scope

  • a.k.a. "virtual clusters"

  • not related to Docker/Linux namespaces

  • create a namespace:

namespace.yml
apiVersion: v1
kind: Namespace
metadata:
  name: mynamespace
  labels:
    app.kubernetes.io/name: ${namespaceName}
kubectl create -f namespace.yml
  • get information about namespaces:

kubectl get namespaces
kubectl get all --all-namespaces
  • for every command that should be executed in the namespace, "-n" has to be added, for example:

kubectl -n mynamespace create -f .
  • if no namespace argument is given, the command is executed for namespace "default"

  • "default" should only be used in very simple test scenarios

Highly Available Applications

Deployment Strategy

  • the deployment specification contains a deployment strategy with the following options:

    • RollingUpdate (default) = replacing pods one by one

      • requires the application to deal with old and new versions deployed at the same time!

    • Recreate = kill all pods and start anew
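
  • sketch of the strategy section inside a deployment spec (the RollingUpdate parameters are illustrative values):

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1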

HorizontalPodAutoscaler

  • scales number of pods in deployment or stateful set depending on metrics like CPU or memory consumption

  • added as a HPA resource to a deployment

  • HPA Controller checks metrics on each application with an HPA resource every 15 seconds and takes action if necessary

  • creating the HPA resource with horizontal-pod-autoscaler.yml:

horizontal-pod-autoscaler.yml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: cxp-hello-k8s
  labels:
    app.kubernetes.io/name: cxp-hello-k8s
    app.kubernetes.io/instance: cxp-hello-k8s
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cxp-hello-k8s
  minReplicas: 2
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: 80
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: 80

PodDisruptionBudget

  • defines how many pods should be running at any given time if the cluster itself is under maintenance

  • if a maintenance activity violates the budget, Kubernetes refuses to execute the corresponding command

  • pod-disruption-budget.yml:

pod-disruption-budget.yml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: cxp-hello-k8s
  labels:
    app.kubernetes.io/name: cxp-hello-k8s
    app.kubernetes.io/instance: cxp-hello-k8s
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: cxp-hello-k8s
      app.kubernetes.io/instance: cxp-hello-k8s
  • when using PodDisruptionBudgets, the replica count should be > 1 !

Affinity

  • rule, why a pod should (affinity) or should not (anti-affinity) run on a specific worker node

  • can be used for example to spread the application across multiple nodes and even availability zones (in AWS) or to make sure that the database runs on the same node as the backend
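
  • sketch of a pod anti-affinity rule in a pod template that prefers spreading pods of the same app across nodes (the label app: myapp is a placeholder):

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: myapp
            topologyKey: kubernetes.io/hostname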

Security

PodSecurityContext

  • part of pod template

  • describes privilege and access control settings of a pod or container

  • deployment manifest with pod security context:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cxp-hello-k8s
  labels:
    app.kubernetes.io/name: cxp-hello-k8s
    app.kubernetes.io/instance: cxp-hello-k8s
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: cxp-hello-k8s
      app.kubernetes.io/instance: cxp-hello-k8s
  template:
    metadata:
      labels:
        app.kubernetes.io/name: cxp-hello-k8s
        app.kubernetes.io/instance: cxp-hello-k8s
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
        - name: cxp-hello-k8s
          image: "docker.myprivatedockerrepo.eu/cxp/cxp-hello-k8s:1.0.0"
          imagePullPolicy: IfNotPresent
          # [..]
          securityContext:
            allowPrivilegeEscalation: false
  • Always run as non-root user (runAsNonRoot == true)

  • Always specify a non-root user as runAsUser

  • Always specify a specific group as runAsGroup (if not set actual group will be 0!)

  • Always set allowPrivilegeEscalation to false

An example of how to run Apache’s httpd as a non-root user can be found here

PodSecurityPolicy

  • enforces a set of security policies for pods on cluster level so that pods that do not comply with these rules cannot be run

Role Based Access Control (RBAC)

Service Account

  • identity for processes running in pods

  • processes inherit roles or cluster roles given to the service account

  • all access to the Kubernetes API from a pod running with a service account will be checked against granted policies

  • service account bound to the namespace

  • every namespace has a service account called "default"

$ kubectl get serviceaccounts
NAME      SECRETS   AGE
default   1         13d
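
  • sketch of a dedicated service account (the name my-app-sa is a placeholder):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  • a pod uses it by setting serviceAccountName: my-app-sa in its pod spec; otherwise it runs with the "default" service account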

Configuring Applications

  • configuration values should not live inside cloud native applications but be passed to them

  • values added in yaml-files, source can be:

    1. directly passed

    2. config maps

    3. secrets

Passing Values Directly

apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
  - name: envar-demo-container
    image: gcr.io/google-samples/node-hello:1.0
    env:
    - name: DEMO_GREETING
      value: "Hello from the environment"
    - name: DEMO_FAREWELL
      value: "Such a sweet sorrow"

Config Map

    env:
    # Define the environment variable
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
          name: special-config
          # Specify the key associated with the value
          key: special.how
  • a config map is a dedicated Kubernetes resource separate from the application

  • hence, decoupling configuration from application

  • map itself is simply a key-value file
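
  • the config map referenced above could look like this (name and key are taken from the snippet above, the value is a placeholder):

apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
data:
  special.how: "very"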

Secret

    env:
    - name: POSTGRES_DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: postgresql-secret
          key: postgresql-password
  • a secret is a dedicated Kubernetes resource

  • = key-value pairs

  • two levels of encryption: secret store is encrypted + values are encrypted

  • secret is read when pod is started
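
  • the secret referenced above could be created like this (name and key match the snippet above, the value is a placeholder):

kubectl create secret generic postgresql-secret --from-literal=postgresql-password=changeme
  • in a yaml manifest, the values under data must be base64-encoded; kubectl create secret handles this automatically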

Best Practices and "Tricks"

  • Label all parts (deployments and services) of an application with the name of the application, so that all parts have the same label and can be searched and filtered easily.

  • Complex cluster definitions with multiple files can be easily created with one command by placing all files in one folder and executing the following within that folder:

kubectl create -f .
  • Kubernetes supports three management approaches: imperative via CLI-commands, declarative via yaml-files and some commands that are imperative but use yaml-files. It’s best to only use the purely declarative yaml-files.

kubectl apply -f file.yml
kubectl apply -f my-folder-with-lots-of-yaml/
kubectl apply -f https://my-site.com/my.yml
  • using the purely declarative mode with yaml-files also allows versioning every change with Git (whereas using CLI-commands will not leave a trace to understand what has been done later on)

  • If an application needs repeatedly executed tasks, don’t use a cron job functionality directly in the container of the application. Instead, create another pod for that task. Because the main application can be executed on multiple pods, all of these pods would execute the cron job when it is implemented within the main application.
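
  • minimal sketch of such a separate, repeatedly executed task as a Kubernetes CronJob (name, schedule, image and command are placeholders; clusters older than 1.21 use apiVersion batch/v1beta1):

cronjob-definition.yml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-recurring-task
spec:
  schedule: "*/15 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: task
              image: busybox
              command: ["sh", "-c", "echo running scheduled task"]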

kubeconfig files

  • kubectl connects to a specific cluster with information from kubeconfig-file

  • kubeconfig usually named ${clusterName}.yaml and located in $HOME/.kube or %USERPROFILE%\.kube

  • example kubeconfig-file:

/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS...tLQo=
    server: https://10.119.16.228:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0t...LQo=
    client-key-data: LS0..
  • for kubectl to use this:

export KUBECONFIG=$HOME/.kube/myk8s.yaml

or

set KUBECONFIG=%USERPROFILE%\.kube\myk8s.yaml
  • if multiple clusters are managed, multiple config files can be combined:

export KUBECONFIG=$HOME/.kube/cluster1.yaml:/cxp-ide/data/kubernetes/.kube/cluster2.yaml

or

set KUBECONFIG=%USERPROFILE%\.kube\cluster1.yaml;C:\cxp-ide\data\kubernetes\.kube\cluster2.yaml
  • switch between contexts (name of context in config-file):

$ kubectl config use-context kubernetes-admin@kubernetes
  • get name of current cluster:

$ kubectl config current-context
  • get information about all contexts:

$ kubectl config view
  • note: AWS EKS uses a different way of authenticating, the aws-iam-authenticator

Tools

Lens

General

  • https://k8slens.dev

  • Tool for monitoring and controlling Kubernetes clusters

  • important: select correct namespace!

  • works with most common Kubernetes distributions

  • "View Clusters in Catalog" analyses kubeonfig-file

  • Catalog: view all clusters + "pin" to hotbar on the left

"Lens Metrics"

  • (show in local installation) Settings → Lens Metrics

  • install metrics-stack directly from Lens into cluster

  • installation in namespace "lens-metrics"

  • creates nodes and enables metrics

  • uninstall also from the settings dialog; the previously created resources are deleted

SmartTerminal

  • at the bottom as a tab

  • connects to the currently selected cluster instead of requiring a manual switch

  • the pod overview has several buttons in the top right corner, e.g. for opening a shell in the pod

Warnings + Error Handling

  • in cluster view: list of issues with cluster

  • create resources directly in Lens via the "+" next to the SmartTerminal tab, or scale deployments (buttons in the top right corner) - however, it is not recommended to alter the cluster directly in Lens; use charts in version control instead!

Browse Helm-Charts and Install from Repos

  • Add Helm Charts: File → Preferences → Kubernetes → Helm Charts

  • Apps → Charts → deploy resources directly in cluster

  • Attention: only use charts from verified sources!

Extensions

Lens Spaces

  • centralized access to clusters

  • without having to share kubeconfig-file, manage access to cluster via centralized service

  • manage teams and access rights (like access to only specific namespaces)

Kubeadm

  • https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

  • = tool for building Kubernetes clusters

  • prerequisites:

    • master and worker nodes specified

    • Docker installed on each node

    • Kubeadm installed on each node

    • master node initialized

    • POD network / cluster network between all nodes initialized

    • each worker node joined to master node

Dashboards

Helm

  • https://helm.sh

  • this workshop uses Helm 3

  • great introduction on YouTube

  • Helm is

    • a package manager for ready-to-use sets of Kubernetes resources and

    • a templating engine for abstracting Kubernetes files

  • Helm Chart defines content of a release with templates

  • Chart Values add environment-specific data to the templates (illustrated below)

  • changes on the cluster through Helm are called releases
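
  • to illustrate the templating idea, a template file references values and values.yaml supplies them (excerpts; the value names are similar to the scaffold created by helm create):

templates/deployment.yaml (excerpt)
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
values.yaml (excerpt)
replicaCount: 2
image:
  repository: nginx
  tag: "1.21"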

Most Relevant Commands

  • helm create CHARTNAME - creates a scaffold chart

  • helm install - install a chart as a new release on a Kubernetes cluster

  • helm upgrade - updates an existing installed release on a Kubernetes cluster

  • helm uninstall - removes an existing release from a Kubernetes cluster

  • helm list - list all existing releases on a Kubernetes cluster

  • helm status - show status of existing release

  • helm dependency update - updates all dependencies to referenced subcharts of chart

Walkthrough

$ helm create ssc-cxp-demo-helm-chart

This will create a new directory with content:

ssc-cxp-demo-helm-chart
|- charts
|- templates
|  |- test
|  |  |- test-connection.yaml
|  | _helpers.tpl
|  | deployment.yaml
|  | hpa.yaml
|  | ingress.yaml
|  | NOTES.txt
|  | service.yaml
|  | serviceaccount.yaml
| .helmignore
| Chart.yaml
| values.yaml

Verify modified Helm chart:

$ helm lint ssc-cxp-demo-helm-chart

Install Helm chart as a new release:

$ helm install ssc-cxp-demo-helm-chart ssc-cxp-demo-helm-chart --namespace cxp-team-heiterbiswolkig --debug --atomic
  • the first argument in the command above is the name of the release, the second is the directory with the Helm chart files in it - both are "ssc-cxp-demo-helm-chart" in this example

Debugging creation of new release with command line arguments:

  • --debug: displays the rendered Kubernetes manifests used to install the release (useful for troubleshooting!)

  • --atomic: waits for all pods to become ready within a certain period of time (5 minutes by default); automatically rolls back on timeout

  • --dry-run: simulates the install without actually performing it

See status with

$ helm list --namespace cxp-team-heiterbiswolkig
$ helm status ssc-cxp-demo-helm-chart --namespace cxp-team-heiterbiswolkig

Upgrade existing Helm release:

$ helm upgrade ssc-cxp-demo-helm-chart ssc-cxp-demo-helm-chart --namespace cxp-team-heiterbiswolkig --debug --atomic
  • upgrades will only change the resources that have been changed instead of restarting all resources

  • in a pipeline, "upgrade --install" should be used so that non-existing deployments will be installed without differentiating between the "install"- and "upgrade"-commands

Uninstall existing Helm release:

$ helm uninstall ssc-cxp-demo-helm-chart --namespace cxp-team-heiterbiswolkig --debug

Tooling

  • Use Lens for managing Kubernetes resources

  • hint: after installing, press "skip" at the "Lens Spaces" Login-Page - no account required for using Lens!

  • The cluster created with Kind in WSL Ubuntu can be added to Lens by adding /home/your-user-name/.kube/config - file in Lens. To find it in Windows, execute "explorer.exe ." in Ubuntu. This will open a file explorer with the path of your current Ubuntu location.

Best Practices

  • When using Helm, don’t change the installation with kubectl! Instead, re-deploy changed Helm chart.

  • debugging non-functional deployments is only viable by analyzing with Lens while deploying - Helm cannot give useful error messages in case of a non-functional Helm configuration

  • secrets should not be in the values.yaml, but should be passed as arguments to "helm upgrade" (see the example below)

  • stages are managed by having a values-dev.yaml with only the values that should be overridden from values.yaml:

$ helm upgrade ... --values values-dev.yaml
  • the command above takes the normal values.yaml but also values-dev.yaml and overrides all given values
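
  • a secret can, for example, be passed on the command line with --set (the key database.password and the environment variable DB_PASSWORD are placeholders):

$ helm upgrade ... --values values-dev.yaml --set database.password=$DB_PASSWORD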

Reusing Charts

  • charts can be shared with community via chart repositories, for example Artifact Hub

  • charts can either be used directly or as sub-charts

Reuse Chart Directly
  • add repo for new chart:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
  • alter values.yaml provided by chart

  • run helm install:

helm install cxp-postgres-direct bitnami/postgresql --namespace ${namespaceName} --values values.yaml --debug --atomic
Wrap Shared Chart as Sub-Chart
  • add repo for dependency-chart:

$ helm repo add bitnami https://charts.bitnami.com/bitnami
  • create wrapper-chart:

$ helm create cxp-postgres-wrapped
  • remove all files except Chart.yaml, values.yaml, _helpers.tpl and .helmignore

  • add the sub-chart as a dependency in Chart.yaml:

Chart.yaml
apiVersion: v2
name: cxp-postgresql-wrapped
description: A wrapper chart for PostgreSQL
version: 0.1.0
appVersion: 11.16.0
dependencies:
  - name: postgresql
    version: "8.8.0"
    repository: "@bitnami"
  • download the external charts into the charts directory:

$ helm dependency update
cxp-postgresql-wrapped
|- charts
|  | postgresql-8.8.0.tgz
| .helmignore
| Chart.yaml
| values.yaml
  • add values to sub-charts through the wrapper chart’s own values.yaml: own values at the top level, values for dependencies nested under the name of the dependency:

# ----------------------------------
# your own chart's values
# ----------------------------------
someValue: "123"
# [..]

# ----------------------------------
# PostgreSQL subchart values
# ----------------------------------
postgresql:
  image:
    registry: docker.io
    repository: bitnami/postgresql
    tag: 11.7.0-debian-10-r73
  # [..]
  • when to use a wrapper: when the official chart doesn’t include needed objects, the wrapper chart can add those objects (like ingresses)
