simple-kubernetes-with-ingress

Getting started with Kubernetes and ingress

This just serves as an example of how to work with ingress on a local Kubernetes cluster.

Make sure you are in the right place in your file system: switch to the simple-kubernetes-with-ingress folder.

Create the cluster

We will use a kind cluster in this exercise because it is easy to create on your local machine and just as easy to tear down. Kind builds on Docker, so you should have Docker installed and running.

$ kind create cluster --name ingress --config=config.yaml
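
If you are curious what the cluster definition contains, the config.yaml in the folder may of course differ, but a typical kind configuration for running an ingress controller on localhost looks roughly like this sketch (the worker nodes and port numbers here are assumptions):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  # label the node so the nginx ingress controller can be scheduled onto it
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80    # forward host port 80 into the cluster so curl localhost/... works
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker           # extra "nodes" (assumed) so pods can be spread across several of them
- role: worker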

Install ingress controller

A Kubernetes ingress controller is a reverse proxy or layer 7 load balancer that accepts network traffic from outside the cluster and routes it to pods inside the cluster. Kubernetes provides an Ingress resource type for specifying ingress configuration, but no ingress controller is installed by default, as the choice of controller is specific to the infrastructure. Since we are running on a local machine, we will use nginx as the ingress controller for this workshop example.

To see what the definition of an ingress controller actually contains, and to install it:

$ cat ingress-controller.yaml 
$ kubectl create -f ./ingress-controller.yaml

Wait for the deployment to be ready.

$ kubectl wait pod -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/component=controller --for condition=Ready  --timeout=45s

In the remaining parts of the workshop we will just use the default namespace, to keep the commands you need to type simpler.

Install the applications

Take a look at the applications, which are basically two deployments with one replica (pod) each and one deployment with 4 replicas.

$ cat deployments.yaml 
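
As a rough sketch of what one of these deployments may look like (the labels, container image, and port below are assumptions, so check deployments.yaml for the real values):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-foo-app
spec:
  replicas: 1                 # the hello-baz-app deployment uses 4 replicas instead
  selector:
    matchLabels:
      app: hello-foo          # label is an assumption; check deployments.yaml
  template:
    metadata:
      labels:
        app: hello-foo
    spec:
      containers:
      - name: hello-foo
        image: hello-app:1.0  # placeholder image; the real image is defined in deployments.yaml
        ports:
        - containerPort: 8080 # assumed container port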

Proceed to install these applications:

$ kubectl create -f ./deployments.yaml

Install the services for the applications

Take a look at the services, which are in front of the applications.

$ cat services.yaml 
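
Again, a rough sketch of what one of the services may look like, assuming the pod labels and container port from the deployment sketch above:

apiVersion: v1
kind: Service
metadata:
  name: hello-foo-service
spec:
  selector:
    app: hello-foo      # must match the pod labels set by the deployment
  ports:
  - port: 80            # port the service exposes inside the cluster
    targetPort: 8080    # container port (assumed; check services.yaml)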

Then install the services.

$ kubectl create -f ./services.yaml

Install the ingress for the services

Take a look at the local ingress definition for the services, where you will find three definitions: two for foo and bar, and one, called baz, which we will use in a moment.

$ cat ingress.yaml 
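
As an illustration, a path-based rule may look roughly like the sketch below; the rewrite annotation, pathType, and service port are assumptions, and the file in the repo may organise the rules differently:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2   # strip the /hello-foo prefix before forwarding
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello-foo(/|$)(.*)      # requests to localhost/hello-foo/... go to the foo service
        pathType: ImplementationSpecific
        backend:
          service:
            name: hello-foo-service
            port:
              number: 80
      # the hello-bar and hello-baz paths follow the same pattern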

Let us install the ingress:

$ kubectl create -f ./ingress.yaml

Accessing the Application

Now, when you access the application, you can use the ingress controller and the ingress to select between the applications through their services:

To get the hello-bar application

$ curl localhost/hello-bar/hostname

which takes you through the hello-ingress ingress to the hello-bar-service service to the hello-bar-app pod.

To get the hello-foo application

$ curl localhost/hello-foo/hostname

which takes you through the hello-ingress ingress to the hello-foo-service service to the hello-foo-app pod.

To get the hello-baz application

$ curl localhost/hello-baz/hostname

which takes you through the hello-ingress ingress to the hello-baz-service service and on to the hello-baz-app pods. The hello-baz-app deployment contains 4 replicas. Try calling the hello-baz application several times using:

$ curl localhost/hello-baz/hostname

and see that it returns different names, which tells you that traffic is sent to different pods, i.e., the load balancing in the baz service spreads requests across the pods.
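
If you do not want to repeat the command by hand, a small shell loop (nothing repo-specific) works just as well:

$ for i in 1 2 3 4 5; do curl localhost/hello-baz/hostname; echo; done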

This means you now have an application deployed across "nodes", you receive traffic from "outside" the cluster, and that traffic is balanced across the "nodes" to one of the instances of the application. You can see this by examining the names of the responding pods and knowing how they are distributed across the "nodes" in the cluster:

$ kubectl get pods -o wide

Or, if you want to look at the baz app in isolation:

$ kubectl get pods -o wide | grep baz

Virtual Hosting

The above examples expose the applications on the same hostname, localhost, routing the traffic based on the path of the HTTP request (hello-foo, hello-bar, hello-baz). The same ingress controller can also distinguish requests based on the requested hostname. There is an example of this in the multiple-domains folder - take a look at the ingress resource.

$ cat multiple-domains/ingress.yaml
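
The host-based rules will look roughly like this sketch (the resource name and service port are assumptions):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-domains-ingress       # name is an assumption; check multiple-domains/ingress.yaml
spec:
  ingressClassName: nginx
  rules:
  - host: foo-127-0-0-1.nip.io      # route by the requested hostname instead of the path
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-foo-service
            port:
              number: 80
  # bar-127-0-0-1.nip.io and baz-127-0-0-1.nip.io follow the same pattern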

and apply the specification:

$ kubectl apply -f ./multiple-domains/ingress.yaml

Now check the responses when calling with different hostnames:

$ curl foo-127-0-0-1.nip.io/hostname
$ curl bar-127-0-0-1.nip.io/hostname
$ curl baz-127-0-0-1.nip.io/hostname

Note that the three domains (foo-127-0-0-1.nip.io, bar-127-0-0-1.nip.io, baz-127-0-0-1.nip.io) all resolve to 127.0.0.1 (i.e., localhost), but the ingress controller will route based on the HTTP Host header.

Workshop Intentions

The intention of this workshop was to convey some initial knowledge of working with Kubernetes in a very simple way, the aim being to leave you with a bit of knowledge that will hopefully ignite your interest in Kubernetes, since it is possible to work with it on your local machine.

Please give us your feedback so that we can do a better job next time:

  • what did you find hard in the workshop?
  • what did you find too easy in the workshop?
  • how do you perceive that the workshop will help you get acquainted with Kubernetes going forward?
  • did it ignite your interest in Cloud Native and Kubernetes?
  • what can we improve on the:
    • introductory part?
    • the practice part?
    • the notes and accompanying information?

Thank you

If you want to know more about Cloud Native and get together with other people who share the desire to work with Cloud Native, please get in touch with those people - go to Cloud Native Meetups.