This repo is a first look at Azure Kubernetes Service (AKS). It includes a step-by-step guide to configuring an AKS cluster hosting an ASP.NET Core 3.0 API and a .NET Core 3.0 worker process. The worker process subscribes to messages published by the API via Azure Service Bus using NServiceBus.
The guide will have you:
- Configure an AKS cluster
- Create an Azure Container Registry (ACR) for hosting the solution's Docker images
- Build and push the solution's Docker images to ACR
- Configure certificate management on the AKS cluster, automating TLS certificate issuing for the API via Let's Encrypt
- Install AAD Pod Identity, providing user assigned managed identities to pods so applications can authenticate with Azure resources
- Create an NGINX ingress controller for the ASP.NET Core API, enabling layer 7 load-balancing features such as TLS termination
- Create an Azure Key Vault instance to store application secrets, with access granted via AAD Pod Identity
Set the Azure Subscription that you'll be working with.
$ az login
$ az account list
$ export SUBSCRIPTIONID="{Subscription Id}"
$ az account set --subscription $SUBSCRIPTIONID
Create the resource group for the Azure resources that will be created.
$ export RG="aksdemo-rg"
$ ./scripts/1-init-aks/1-create-rg.sh
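The script bodies aren't reproduced in this README; as a rough sketch, 1-create-rg.sh likely amounts to the following, where the australiaeast location is an assumption inferred from the DNS names used later:

$ az group create --name $RG --location australiaeast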
Create a new AKS cluster and configure the kubectl CLI to connect to the new cluster.
$ export AKS_NAME="aksdemocluster"
$ ./scripts/1-init-aks/2-create-aks.sh
$ ./scripts/1-init-aks/3-configure-cli.sh
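A sketch of what the cluster creation and CLI configuration scripts might run; the node count and defaults below are assumptions:

$ az aks create --resource-group $RG --name $AKS_NAME --node-count 2 --generate-ssh-keys
$ az aks get-credentials --resource-group $RG --name $AKS_NAME  # merges the cluster's credentials into ~/.kube/config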
Create a new Azure Container Registry to host the application's Docker images.
$ export ACR_NAME="aksdemo001"
$ ./scripts/1-init-aks/4-create-acr.sh
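4-create-acr.sh presumably wraps az acr create; the Basic SKU here is an assumption:

$ az acr create --resource-group $RG --name $ACR_NAME --sku Basic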
Grant AKS access to pull Docker images from ACR.
$ ./scripts/1-init-aks/5-grant-aks-acr-access.sh
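One common way to grant this at the time of writing was assigning the AcrPull role to the cluster's service principal, along these lines (the script may differ, e.g. by using az aks update --attach-acr):

$ ACR_ID=$(az acr show --name $ACR_NAME --resource-group $RG --query id -o tsv)
$ CLIENT_ID=$(az aks show --name $AKS_NAME --resource-group $RG --query servicePrincipalProfile.clientId -o tsv)
$ az role assignment create --assignee $CLIENT_ID --role AcrPull --scope $ACR_ID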
Build the application's Docker images and push them to Azure Container Registry.
$ ./scripts/2-docker/1-build-docker-images.sh
$ ./scripts/2-docker/2-push-docker-images.sh
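The image names and Dockerfile locations live in the scripts; a sketch assuming a hypothetical values-microservice image (the worker image would follow the same pattern):

$ az acr login --name $ACR_NAME
$ docker build -t $ACR_NAME.azurecr.io/values-microservice:latest .
$ docker push $ACR_NAME.azurecr.io/values-microservice:latest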
Configure a service account for Tiller, allowing installation of the NGINX ingress controller and cert-manager via Helm.
NOTE: Additional security should be configured for Tiller when running in production. Helm 3 will not require Tiller, avoiding its security concerns.
$ ./scripts/3-configure-aks/1-init-tiller.sh
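The standard Helm 2 Tiller bootstrap looks like this; the cluster-admin binding is exactly what the production note above warns about:

$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount kube-system:tiller
$ helm init --service-account tiller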
Create the NGINX ingress controller in the "dev" namespace for layer 7 load balancing, which we'll use for TLS termination. The script pauses while waiting for the external IP; once it appears, press Ctrl+C and use the IP to configure the DNS name in the next step.
$ export NAMESPACE="dev"
$ ./scripts/3-configure-aks/2-nginx-ingress-controller.sh
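A sketch of the ingress controller install, assuming the Helm 2 era stable/nginx-ingress chart and a watch on the service to surface the external IP:

$ kubectl create namespace $NAMESPACE
$ helm install stable/nginx-ingress --namespace $NAMESPACE --set controller.replicaCount=2
$ kubectl get svc -n $NAMESPACE --watch  # Ctrl+C once EXTERNAL-IP is populated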
Configure DNS, passing in the external IP address of the ingress controller.
$ export DNS_NAME="aksdemo001"
$ ./scripts/3-configure-aks/3-configure-dns-name.sh {External IP of Ingress Controller}
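One plausible implementation of 3-configure-dns-name.sh: look up the Azure public IP resource backing the ingress controller's address and set a DNS label on it, yielding $DNS_NAME.{region}.cloudapp.azure.com:

$ IP="{External IP of Ingress Controller}"
$ PUBLIC_IP_ID=$(az network public-ip list --query "[?ipAddress=='$IP'].id" -o tsv)
$ az network public-ip update --ids $PUBLIC_IP_ID --dns-name $DNS_NAME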
Install cert-manager and use Let's Encrypt for automatic certificate issuing and renewal. You may need to wait for cert-manager to be running before installing the cluster issuer.
$ ./scripts/3-configure-aks/4-install-cert-manager.sh
$ ./scripts/3-configure-aks/5-install-cluster-issuer.sh
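A sketch of the two scripts, assuming the cert-manager v0.11 chart layout of the time and a ClusterIssuer named letsencrypt (the version and the issuer name are both assumptions):

$ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml
$ kubectl create namespace cert-manager
$ helm repo add jetstack https://charts.jetstack.io
$ helm install jetstack/cert-manager --name cert-manager --namespace cert-manager --version v0.11.0
$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: {your contact email}
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx
EOF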
Install AAD Pod Identity, allowing pods to use a user assigned managed identity to access Key Vault and other Azure resources.
Details: https://github.com/Azure/aad-pod-identity
$ ./scripts/5-pod-identity/1-install-pod-identity.sh
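The install is typically a single manifest from the aad-pod-identity repo, deploying the MIC and NMI components (URL as documented by the project for RBAC-enabled clusters at the time):

$ kubectl apply -f https://raw.githubusercontent.com/Azure/aad-pod-identity/master/deploy/infra/deployment-rbac.yaml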
Create the managed identity "values-microservice-identity" to be used by the application to access Azure resources.
$ ./scripts/5-pod-identity/2-create-azure-identity.sh
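Likely little more than a wrapper around az identity create; note the clientId and principalId in its output, which are needed in the following steps:

$ az identity create --resource-group $RG --name values-microservice-identity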
Assign the Managed Identity Operator role to the cluster's service principal.
$ ./scripts/5-pod-identity/3-grant-managed-identity-operator-role-to-identity.sh
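A sketch of the role assignment, scoped to the new identity so the MIC component can assign it to nodes:

$ IDENTITY_ID=$(az identity show --resource-group $RG --name values-microservice-identity --query id -o tsv)
$ CLIENT_ID=$(az aks show --name $AKS_NAME --resource-group $RG --query servicePrincipalProfile.clientId -o tsv)
$ az role assignment create --assignee $CLIENT_ID --role "Managed Identity Operator" --scope $IDENTITY_ID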
Install the application's AzureIdentity resource and the AzureIdentityBinding that assigns the identity to selected pods.
TODO: Parameters in the YAML cause issues when assigning; hard-coding the ids with double quotes works.
$ ./scripts/5-pod-identity/4-install-azure-identity.sh
$ ./scripts/5-pod-identity/5-create-azure-identity-binding.sh
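The two resources look roughly like this (the selector value is an assumption; pods opt in by carrying a matching aadpodidbinding label). The quoted ids are the hard-coded values the TODO above refers to:

$ kubectl apply -n $NAMESPACE -f - <<EOF
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: values-microservice-identity
spec:
  type: 0  # 0 = user assigned managed identity
  resourceID: "/subscriptions/{subscriptionId}/resourcegroups/$RG/providers/Microsoft.ManagedIdentity/userAssignedIdentities/values-microservice-identity"
  clientID: "{clientId}"
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: values-microservice-identity-binding
spec:
  azureIdentity: values-microservice-identity
  selector: values-microservice
EOF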
Create an Azure Key Vault instance to store the application's secrets, and grant the application's managed identity access to get and list secrets.
$ export KEYVAULT_NAME="aksdemo001-kv"
$ ./scripts/6-deploy-app/1-create-keyvault.sh
$ ./scripts/6-deploy-app/2-assign-azure-identity-roles-keyvault.sh {clientId} {subscriptionId} {principalId}
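A sketch of the vault creation and access policy, granting the identity's principal get/list on secrets:

$ az keyvault create --resource-group $RG --name $KEYVAULT_NAME
$ az keyvault set-policy --name $KEYVAULT_NAME --object-id {principalId} --secret-permissions get list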
The values API publishes commands that are subscribed to by the values backend. Create the Azure Service Bus namespace and set the connection string in Key Vault.
TODO: Use the managed identity for Service Bus rather than a connection string.
$ export SB_NAME="aksdemo001sb"
$ ./scripts/6-deploy-app/3-create-servicebus.sh
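A sketch of 3-create-servicebus.sh: create the namespace, read the root connection string, and store it as a Key Vault secret (the secret name here is an assumption; it must match what the application reads):

$ az servicebus namespace create --resource-group $RG --name $SB_NAME --sku Standard
$ SB_CONNECTION=$(az servicebus namespace authorization-rule keys list --resource-group $RG --namespace-name $SB_NAME --name RootManageSharedAccessKey --query primaryConnectionString -o tsv)
$ az keyvault secret set --vault-name $KEYVAULT_NAME --name ServiceBusConnectionString --value "$SB_CONNECTION"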
Deploy the application's Kubernetes resources and create an ingress route allowing access from the internet.
$ ./scripts/6-deploy-app/4-create-app-resources.sh
$ ./scripts/6-deploy-app/5-create-app-ingress.sh
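The ingress the script creates looks roughly like the following, tying together the pieces above: the nginx class, the letsencrypt cluster issuer annotation (name assumed, matching the issuer sketch earlier), the cloudapp DNS host, and the tls-secret that cert-manager populates (the service port is also an assumption):

$ kubectl apply -n $NAMESPACE -f - <<EOF
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: values-microservice-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - aksdemo001.australiaeast.cloudapp.azure.com
    secretName: tls-secret
  rules:
  - host: aksdemo001.australiaeast.cloudapp.azure.com
    http:
      paths:
      - backend:
          serviceName: values-microservice-svc
          servicePort: 80
EOF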
Browse to:
- https://aksdemo001.australiaeast.cloudapp.azure.com/env
- https://aksdemo001.australiaeast.cloudapp.azure.com/secrets
Send a command via the API to be consumed by the backend worker:
Stream the logs of all worker pods in a new shell
$ kubectl logs -f -l apptype=worker -n dev
Send a POST request to the command endpoint and view the handled command in the worker's log stream.
$ curl -d '["value1"]' -H 'Content-Type: application/json' -X POST https://aksdemo001.australiaeast.cloudapp.azure.com/command
- Scale the API
$ kubectl scale --replicas=6 deployment/values-microservice-deployment -n dev
- Delete API pods
$ kubectl delete pods -l ms=values -n dev
$ kubectl get pods -n dev --watch
- Get pod logs in the dev namespace
$ kubectl get pods -n dev
$ kubectl logs {pod name} -n dev
- Confirm the Let's Encrypt certificate was issued
$ kubectl describe certificate tls-secret -n dev
- When pushing Docker images fails with "error creating overlay mount to /var/lib/docker/overlay2/../merged: device or resource busy", try limiting the number of parallel uploads to 1 as a workaround, i.e. edit the Docker daemon config as follows and then run docker push again.
$ sudo systemctl stop docker
$ sudo nano /etc/docker/daemon.json
{ "max-concurrent-uploads": 1 }
$ sudo systemctl start docker
- Get aad-pod-identity assigned identities
$ kubectl get AzureAssignedIdentities --all-namespaces
- aad-pod-identity's MIC does not always assign the Azure identity before application startup; in most cases, after a crash loop or two, the identity is assigned and the application starts up. See Azure/aad-pod-identity#279 for an example. To observe this behaviour:
Tail the MIC elected leader in a new shell
$ kubectl get pods  # get the pod name of the MIC elected leader
$ kubectl logs -f {mic elected leader pod name}
In a new shell, delete the deployments and service
$ kubectl delete deployment values-microservice-deployment -n dev
$ kubectl delete deployment values-backend-deployment -n dev
$ kubectl delete svc values-microservice-svc -n dev
In the MIC log tail, notice the identity binding being removed.
Re-create the deployments
$ ./scripts/6-deploy-app/4-create-app-resources.sh
$ kubectl get pods -n dev --watch
In the MIC log tail you'll see the identity being assigned; notice that in some cases there are restarts before the pod starts successfully.