The README shows you a simple way to get started with Contour on your cluster.
This topic explains the details and shows you additional options.
Most of this covers running Contour using a Kubernetes Service of `type: LoadBalancer`.
If you don't have a cluster with that capability, see the Running without a Kubernetes LoadBalancer section.
The recommended installation of Contour is Contour running in a Deployment and Envoy in a DaemonSet, with TLS securing the gRPC communication between them.
The `contour` example will install this for you.
A Service of `type: LoadBalancer` is also set up to forward traffic to the Envoy instances.
The details of the installation are documented in the `contour` example's README.md.
If you wish to use host networking, please see the appropriate section for the details.
To retrieve the IP address or DNS name assigned to your Contour deployment, run:
$ kubectl get -n projectcontour service contour -o wide
On AWS, for example, the response looks like:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
contour 10.106.53.14 a47761ccbb9ce11e7b27f023b7e83d33-2036788482.ap-southeast-2.elb.amazonaws.com 80:30274/TCP 3h app=contour
Depending on your cloud provider, the `EXTERNAL-IP` value is an IP address or, in the case of Amazon AWS, the DNS name of the ELB created for Contour. Keep a record of this value.
Note that if you are running an Elastic Load Balancer (ELB) on AWS, you must add more details to your configuration to get the remote address of your incoming connections. See the instructions for enabling the PROXY protocol.
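As a rough sketch, enabling the PROXY protocol on an AWS Classic ELB is typically done with annotations on the Service (the annotation keys shown are the standard AWS cloud-provider ones; the Service name and namespace follow the example queried above, and Contour itself must also be configured to expect the PROXY protocol header per the linked instructions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: contour
  namespace: projectcontour
  annotations:
    # Treat backend traffic as raw TCP so the PROXY header is not stripped
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    # Ask the ELB to prepend the PROXY protocol header on every port
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
```

The rest of the Service spec (selector, ports) stays as it is in the example manifests.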
On Minikube, to get the IP address of the Contour service run:
$ minikube service -n projectcontour contour --url
The response is always an IP address, for example `http://192.168.99.100:30588`. This is used as CONTOUR_IP in the rest of the documentation.
When creating the cluster on Kind, pass a custom configuration to allow Kind to expose port 8080 to your local host:
```yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
- role: worker
  extraPortMappings:
  - containerPort: 8080
    hostPort: 8080
    listenAddress: "0.0.0.0"
```
Then run the create cluster command, passing the config file as a parameter.
This file is in the `examples/kind` directory:
$ kind create cluster --config examples/kind/kind-expose-port.yaml
Then, your CONTOUR_IP (as used below) will just be `localhost:8080`.
Note: If you change Envoy's ports to bind to 80/443, you can add entries to your local `/etc/hosts` file and make requests like `http://kuard.local`, which matches how it might work in a production installation.
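For example, with Envoy bound to port 80 and a hypothetical node address of 192.168.99.100, the `/etc/hosts` entry would look like:

```
# /etc/hosts — map the demo hostname to your Contour address (address is illustrative)
192.168.99.100   kuard.local
```

After that, `http://kuard.local` in a browser (or `curl http://kuard.local`) reaches the cluster without needing a Host header override.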
The Contour repository contains an example deployment of the Kubernetes Up and Running demo application, kuard.
To test your Contour deployment, deploy `kuard` with the following command:
$ kubectl apply -f https://projectcontour.io/examples/kuard.yaml
Then monitor the progress of the deployment with:
$ kubectl get po,svc,ing -l app=kuard
You should see something like:
NAME READY STATUS RESTARTS AGE
po/kuard-370091993-ps2gf 1/1 Running 0 4m
po/kuard-370091993-r63cm 1/1 Running 0 4m
po/kuard-370091993-t4dqk 1/1 Running 0 4m
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/kuard 10.110.67.121 <none> 80/TCP 4m
NAME HOSTS ADDRESS PORTS AGE
ing/kuard * 10.0.0.47 80 4m
... showing that there are three Pods, one Service, and one Ingress that is bound to all virtual hosts (`*`).
In your browser, navigate to the IP or DNS address of the Contour Service to interact with the demo application.
To test your Contour deployment with IngressRoutes, run the following command:
$ kubectl apply -f https://projectcontour.io/examples/kuard-ingressroute.yaml
Then monitor the progress of the deployment with:
$ kubectl get po,svc,ingressroute -l app=kuard
You should see something like:
NAME READY STATUS RESTARTS AGE
pod/kuard-bcc7bf7df-9hj8d 1/1 Running 0 1h
pod/kuard-bcc7bf7df-bkbr5 1/1 Running 0 1h
pod/kuard-bcc7bf7df-vkbtl 1/1 Running 0 1h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kuard ClusterIP 10.102.239.168 <none> 80/TCP 1h
NAME CREATED AT
ingressroute.contour.heptio.com/kuard 1h
... showing that there are three Pods, one Service, and one IngressRoute.
In your terminal, use curl with the IP or DNS address of the Contour Service to send a request to the demo application:
$ curl -H 'Host: kuard.local' ${CONTOUR_IP}
If you can't or don't want to use a Service of `type: LoadBalancer`, there are other ways to run Contour.
If your cluster doesn't have the capability to configure a Kubernetes LoadBalancer,
or if you want to configure the load balancer outside Kubernetes,
you can change the Envoy Service in the `02-service-envoy.yaml` file to set `type` to `NodePort`.
This will have every node in your cluster listen on the resultant port and forward traffic to Contour.
That port can be discovered by taking the second number listed in the `PORT` column when listing the service, for example `30274` in `80:30274/TCP`.
Now you can point your browser at the specified port on any node in your cluster to communicate with Contour.
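A minimal sketch of the edited Service (field names come from the standard Kubernetes Service spec; the selector labels, ports, and `nodePort` value here are illustrative and should match what `02-service-envoy.yaml` already contains):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: envoy
  namespace: projectcontour
spec:
  type: NodePort   # was: LoadBalancer
  selector:
    app: envoy
  ports:
  - name: http
    port: 80
    targetPort: 8080
    # Optional: pin the node port; omit it to let Kubernetes allocate one
    nodePort: 30274
```

With `nodePort` omitted, Kubernetes picks a port from its configured node-port range, which you can then read from the `PORT(S)` column as described above.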
You can run Contour without a Kubernetes Service at all.
This is done by having the Contour pod run with host networking.
Do this with `hostNetwork: true` on your pod definition.
Envoy will listen directly on port 8080 on each host that it is running.
This is best paired with a DaemonSet (perhaps paired with Node affinity) to ensure that a single instance of Contour runs on each Node.
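A minimal sketch of such a DaemonSet spec (the image tag and labels are illustrative, not taken from the actual example manifests):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: envoy
  namespace: projectcontour
spec:
  selector:
    matchLabels:
      app: envoy
  template:
    metadata:
      labels:
        app: envoy
    spec:
      # Bind directly into each node's network namespace,
      # so Envoy listens on the node's own port 8080
      hostNetwork: true
      # Keep resolving cluster-internal DNS names while on the host network
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: envoy
        image: docker.io/envoyproxy/envoy:latest  # illustrative tag; use the example's pinned version
```

Because a DaemonSet schedules one pod per node, each node's port 8080 forwards to exactly one Envoy instance.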
See the AWS NLB tutorial as an example.
If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress,
you can specify the annotation `kubernetes.io/ingress.class: "contour"` on all ingresses that you would like Contour to claim.
You can customize the class name with the `--ingress-class-name` flag at runtime.
If the `kubernetes.io/ingress.class` annotation is present with a value other than `"contour"`, Contour will ignore that ingress.
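For example, an Ingress that Contour will claim might carry the annotation like this (the API version and the kuard backend are illustrative, matching the demo application above):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kuard
  annotations:
    # Tell Contour (and other controllers) who owns this Ingress
    kubernetes.io/ingress.class: "contour"
spec:
  backend:
    serviceName: kuard
    servicePort: 80
```

An Ingress with this annotation set to any other value, such as `"gce"`, would be left to that controller and ignored by Contour.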
To remove Contour from your cluster, delete the namespace:
$ kubectl delete ns projectcontour