
Commit

add tls support and more reliable service health check
Wilson, Dan committed Feb 22, 2016
1 parent ee01df4 commit 0167f5d
Showing 7 changed files with 137 additions and 30 deletions.
22 changes: 18 additions & 4 deletions continuousdelivery/README.md
@@ -1,12 +1,19 @@

# Kubernetes Continuous Delivery
Deployment scripts for continuous integration and\or continuous delivery of kubernetes projects. This project was tested and released using a private install of both CircleCI and Jenkins. The core deployments scripts (./deploy/) are used for both systems and as a result are designed to be extensible. Please contribute to add features and support for different CI/CD systems as needed.
Deployment scripts for continuous integration and/or continuous delivery of kubernetes projects. This project was tested and released using private installs of CircleCI, Jenkins, and SolanoCI. The core deployment scripts (./deploy/) are used for all three systems and as a result are fairly robust and easy to adapt to other systems. Please contribute to add features and support for different CI/CD systems as needed.

The idea for these scripts was based on the [docker-hello-google example in the CircleCI repo](https://github.com/circleci/docker-hello-google). Thank you for giving us all a head start!

## Usage

In general, the documentation for scripts is handled inline with comments. You must have a [kubernetes config](http://kubernetes.io/v1.0/docs/user-guide/kubeconfig-file.html) file available and accessible to your build system from a URL. An S3 URL was used in testing. The files from this project should be added to your existing github project (minus the Dockerfile, package.json and server.js that are here just for testing). If you want to make sure your config file is cached an not downloaded with each run then md5sum the config file and update the KUBECHECKSUM variable in circle.yml or jenkins.sh. ~~See build environment setup instructions for Jenkins and CircleCI if you don't currently have an environment setup.~~ <- TODO.
In general, the documentation for the scripts is handled inline with comments. You must have a [kubernetes config](http://kubernetes.io/v1.0/docs/user-guide/kubeconfig-file.html) file available and accessible to your build system from a URL; an S3 URL was used in testing. The files from this project should be added to your existing GitHub project (minus the Dockerfile, package.json and server.js, which are here just for testing). If you want to make sure your config file is cached and not downloaded with each run, md5sum the config file and update the KUBECHECKSUM variable in circle.yml or jenkins.sh.
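
For example, a minimal way to generate that checksum locally (assuming the config file has been downloaded as `kubeconfig`):

```bash
# print the md5 of the kube config; paste the hash into KUBECHECKSUM
md5sum kubeconfig | awk '{print $1}'
```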

Your CI build servers need to have Docker installed, and in the case of Jenkins and CircleCI the Docker socket must be accessible from inside Docker containers (sudo chmod 777 /var/run/docker.sock). This would be a security issue for a cloud provider, but since we're working on our own private CI system here and we trust our own containers, this is not a problem. It is critical for getting Docker caching to work between builds until Docker caching is available across Docker daemon restarts. For CircleCI there are a few extra steps that should be part of your bootstrap scripts.

echo 'DOCKER_OPTS="-g /data/docker"' | sudo tee -a /etc/default/docker
export CIRCLE_SHARED_DOCKER_ENGINE=true
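
A rough bootstrap sketch that puts these pieces together (the docker restart step and file paths are assumptions and may differ on your build hosts):

```bash
#!/bin/bash
# hypothetical CircleCI build-host bootstrap
sudo chmod 777 /var/run/docker.sock                                      # expose the daemon socket to build containers
echo 'DOCKER_OPTS="-g /data/docker"' | sudo tee -a /etc/default/docker   # keep image layers on a persistent path
export CIRCLE_SHARED_DOCKER_ENGINE=true                                  # share one docker engine across builds
sudo service docker restart                                              # assumption: restart so DOCKER_OPTS takes effect
```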

SolanoCI has Docker caching built into their on-premise platform as a part of a beta release. Reach out to them for special instructions for applying that release.

You must have at least one running kubernetes cluster. If you intend to deploy to production install multiple kubernetes clusters and run the deploy command multiple times with the different context names from your kube config file.
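
For example, a sketch of deploying the same service to a QA and a production cluster (the context names and directory below match the defaults in solano.yml and are placeholders for your own):

```bash
# each call switches kubectl to the named context and deploys the yaml in ./kubeyaml
./deploy/deploy-service.sh aws_kubernetes ./kubeyaml
./deploy/deploy-service.sh aws_kubernetes2 ./kubeyaml
```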

@@ -32,11 +39,18 @@ chmod +x ./jenkins.sh && ./jenkins.sh
6. Push changes to GitHub and check the Jenkins job console output for error/success messages.

## Circle CI
1. Update the circle.yaml environment variables to fit your environment.
1. Update the circle.yml environment variables to fit your environment.
2. Link your project to Circle CI
3. Manually set the Docker $dockeruser and $dockerpass environment variables on your CircleCI project. NOTE: going this route keeps the credentials out of your GitHub account.
4. Run a build.
3. Check the job output for any errors and the deploy script output for the proxy api endpoint to hit your service for any manual testing.
5. Check the job output for any errors. The deploy script prints the API proxy endpoint you can hit for manual testing (see the curl sketch below) and a link to Kibana.
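
Once the deploy step finishes you can hit the printed endpoint yourself; a sketch (host, credentials, namespace and service name are placeholders):

```bash
# -k because the api endpoint may use a self-signed certificate; expect HTTP 200
curl -k "https://<user>:<password>@<kube-api-host>/api/v1/proxy/namespaces/<namespace>/services/<servicename>/"
```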

## Solano CI
1. Update the solano.yml environment variables to fit your environment.
2. Link your project to SolanoCI
3. Manually set the Docker $dockeruser and $dockerpass environment variables on your Solano project using the solano config:add command (see the sketch after this list). NOTE: going this route keeps the credentials out of your GitHub account.
4. Run a build.
5. Check the job output for any errors. The deploy script prints the API proxy endpoint you can hit for manual testing and a link to Kibana.
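
A sketch of storing the registry credentials with the Solano CLI (the `repo` scope and the example values are assumptions; adjust to your setup):

```bash
# keep Docker registry credentials on the Solano project rather than in the repository
solano config:add repo dockeruser <your-registry-user>
solano config:add repo dockerpass <your-registry-password>
```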

##### Author
Dan Wilson: [email protected]
2 changes: 1 addition & 1 deletion continuousdelivery/circle.yml
@@ -24,6 +24,7 @@ machine:
DOCKER_REGISTRY: docker-registry.yourcompany.com
# the docker container defaulted to user/project
CONTAINER1: $(tr [A-Z] [a-z] <<< ${CIRCLE_PROJECT_USERNAME:0:8})/$(tr [A-Z] [a-z] <<< ${CIRCLE_PROJECT_REPONAME:0:15}| tr -d '_-')
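# CONTAINER1 lowercases the first 8 characters of the project user and the first 15 of the repo name and strips _ and - so the result is a valid image name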
#https_proxy: https://xxx.xxx.xxx.xxx:8080/ #uncomment if you need to use a proxy to access the kubernetes api

# Customize checkout
# checkout:
@@ -47,7 +48,6 @@ test:
# run the container and add a label
# do not specify a local port since the docker daemon is shared
- docker run -p 3000 -d --label ${CONTAINER1} ${DOCKER_REGISTRY}/${CONTAINER1}:latest
# - docker search docker-registry.concur.com:4443/danw
# show how to execute a command in your container
# run any commands to test inside\outside of the container here
- npm test
70 changes: 50 additions & 20 deletions continuousdelivery/deploy/deploy-service.sh
@@ -16,7 +16,7 @@

# $1 = the kubernetes context (specified in kubeconfig)
# $2 = directory that contains your kubernetes files to deploy
# $3 = set to y to perform a rolling update
# $3 = pass in rolling to perform a rolling update

DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
CONTEXT="$1"
@@ -29,50 +29,80 @@ $DIR/ensure-kubectl.sh

#set config context
~/.kube/kubectl config use-context ${CONTEXT}
~/.kube/kubectl version

#get user password and api ip from config data
export kubepass=`(~/.kube/kubectl config view -o json | jq ' { mycontext: .["current-context"], contexts: .contexts[], users: .users[], clusters: .clusters[]}' | jq 'select(.mycontext == .contexts.name) | select(.contexts.context.user == .users.name) | select(.contexts.context.cluster == .clusters.name)' | jq .users.user.password | tr -d '\"')`
#get user, password, certs, namespace and api ip from config data
export kubepass=`(~/.kube/kubectl config view -o json --raw --minify | jq .users[0].user.password | tr -d '\"')`

export kubeuser=`(~/.kube/kubectl config view -o json | jq ' { mycontext: .["current-context"], contexts: .contexts[], users: .users[], clusters: .clusters[]}' | jq 'select(.mycontext == .contexts.name) | select(.contexts.context.user == .users.name) | select(.contexts.context.cluster == .clusters.name)' | jq .users.user.username | tr -d '\"')`
export kubeuser=`(~/.kube/kubectl config view -o json --raw --minify | jq .users[0].user.username | tr -d '\"')`

export kubeurl=`(~/.kube/kubectl config view -o json | jq ' { mycontext: .["current-context"], contexts: .contexts[], users: .users[], clusters: .clusters[]}' | jq 'select(.mycontext == .contexts.name) | select(.contexts.context.user == .users.name) | select(.contexts.context.cluster == .clusters.name)' | jq .clusters.cluster.server | tr -d '\"')`
export kubeurl=`(~/.kube/kubectl config view -o json --raw --minify | jq .clusters[0].cluster.server | tr -d '\"')`

export kubenamespace=`(~/.kube/kubectl config view -o json | jq ' { mycontext: .["current-context"], contexts: .contexts[]}' | jq 'select(.mycontext == .contexts.name)' | jq .contexts.context.namespace | tr -d '\"')`
export kubenamespace=`(~/.kube/kubectl config view -o json --raw --minify | jq .contexts[0].context.namespace | tr -d '\"')`
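# --raw keeps certificate data in the output instead of redacting it; --minify trims the config to the current context,
# so index [0] of .users/.clusters/.contexts is the entry for ${CONTEXT}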

export kubeip=`(echo $kubeurl | sed 's~http[s]*://~~g')`

export https=`(echo $kubeurl | awk 'BEGIN { FS = ":" } ; { print $1 }')`

export certdata=`(~/.kube/kubectl config view -o json --raw --minify | jq '.users[0].user["client-certificate-data"]' | tr -d '\"')`

export certcmd=""
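# certcmd collects the curl TLS flags (--cert/--key/--cacert) extracted below and is appended to the health-check curl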

if [ "$certdata" != "null" ] && [ "$certdata" != "" ];
then
~/.kube/kubectl config view -o json --raw --minify | jq '.users[0].user["client-certificate-data"]' | tr -d '\"' | base64 --decode > ${CONTEXT}-cert.pem
export certcmd="$certcmd --cert ${CONTEXT}-cert.pem"
fi

export keydata=`(~/.kube/kubectl config view -o json --raw --minify | jq '.users[0].user["client-key-data"]' | tr -d '\"')`

if [ "$keydata" != "null" ] && [ "$keydata" != "" ];
then
~/.kube/kubectl config view -o json --raw --minify | jq '.users[0].user["client-key-data"]' | tr -d '\"' | base64 --decode > ${CONTEXT}-key.pem
export certcmd="$certcmd --key ${CONTEXT}-key.pem"
fi

export cadata=`(~/.kube/kubectl config view -o json --raw --minify | jq '.clusters[0].cluster["certificate-authority-data"]' | tr -d '\"')`

if [ "$cadata" != "null" ] && [ "$cadata" != "" ];
then
~/.kube/kubectl config view -o json --raw --minify | jq '.clusters[0].cluster["certificate-authority-data"]' | tr -d '\"' | base64 --decode > ${CONTEXT}-ca.pem
export certcmd="$certcmd --cacert ${CONTEXT}-ca.pem"
fi

#set -x

#print some useful data for folks to check on their service later
echo "Deploying service to ${https}://${kubeuser}:${kubepass}@${kubeip}/api/v1/proxy/namespaces/${kubenamespace}/services/${SERVICENAME}"
echo "Monitor your service at ${https}://${kubeuser}:${kubepass}@${kubeip}/api/v1/proxy/namespaces/kube-system/services/kibana-logging/?#/discover?_a=(columns:!(_source),filters:!(),index:'logstash-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'tag:kubernetes.${SERVICENAME}*')))"
echo "Monitor your service at ${https}://${kubeuser}:${kubepass}@${kubeip}/api/v1/proxy/namespaces/kube-system/services/kibana-logging/?#/discover?_a=(columns:!(log),filters:!(),index:'logstash-*',interval:auto,query:(query_string:(analyze_wildcard:!t,query:'tag:%22kubernetes.${SERVICENAME}*%22')),sort:!('@timestamp',asc))"

if [ "${ROLLING}" = "rolling" ]
then
if [ "${ROLLING}" = "rolling" ]; then
# perform a rolling update.
# assumes your service\rc are already created
~/.kube/kubectl rolling-update ${SERVICENAME} --image=${DOCKER_REGISTRY}/${CONTAINER1}:latest || true

else

# delete service (throws and error to ignore if service does not exist already)
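# envsubst substitutes exported variables (e.g. ${SERVICENAME}, ${CONTAINER1}, ${BUILD}) referenced in the yaml templates before kubectl runs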
for f in ${DEPLOYDIR}/*.yaml; do envsubst < $f > kubetemp.yaml; cat kubetemp.yaml; ~/.kube/kubectl delete --namespace=${kubenamespace} -f kubetemp.yaml || true; done
for f in ${DEPLOYDIR}/*.yaml; do envsubst < $f > kubetemp.yaml; cat kubetemp.yaml; echo ""; ~/.kube/kubectl delete --namespace=${kubenamespace} -f kubetemp.yaml || true; done

# create service (does nothing if the service already exists)
for f in ${DEPLOYDIR}/*.yaml; do envsubst < $f > kubetemp.yaml; ~/.kube/kubectl create --namespace=${kubenamespace} -f kubetemp.yaml || true; done
for f in ${DEPLOYDIR}/*.yaml; do envsubst < $f > kubetemp.yaml; ~/.kube/kubectl create --namespace=${kubenamespace} -f kubetemp.yaml --validate=false || true; done
fi

# wait for services to start
sleep 30

# try to hit the api proxy endpoint
curl -k --retry 10 --retry-delay 5 -v ${https}://${kubeuser}:${kubepass}@${kubeip}/api/v1/proxy/namespaces/${kubenamespace}/services/${SERVICENAME}/

# extra check just to get the status code
STATUSCODE=$(curl -k --silent --output /dev/stderr --write-out "%{http_code}" ${https}://${kubeuser}:${kubepass}@${kubeip}/api/v1/proxy/namespaces/${kubenamespace}/services/${SERVICENAME}/)
if [ "$STATUSCODE" -ne "200" ]; then
# write output and set to false so the CI system can report a failure
false
fi
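# health check: poll the service through the api proxy up to 30 times, 10 seconds apart;
# break on HTTP 200, otherwise the failing status is left for the CI system to report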
COUNTER=0
while [ $COUNTER -lt 30 ]; do
let COUNTER=COUNTER+1
echo Service Check: $COUNTER
STATUSCODE=$(curl -k --silent --output /dev/null --write-out "%{http_code}" $certcmd ${https}://${kubeuser}:${kubepass}@${kubeip}/api/v1/proxy/namespaces/${kubenamespace}/services/${SERVICENAME}/)
echo HTTP Status: $STATUSCODE
if [ "$STATUSCODE" -eq "200" ]; then
break
else
sleep 10
false
fi
done
6 changes: 3 additions & 3 deletions continuousdelivery/deploy/ensure-kubectl.sh
@@ -16,7 +16,7 @@

# used to install kubectl inside the build environment plus other tools these scripts leverage.
# uncomment for troubleshooting if required
# set -xv
#set -x

PKG_MANAGER=$( command -v yum || command -v apt-get ) || echo "Neither yum nor apt-get found"

@@ -49,11 +49,11 @@ if [ ! -e ~/.kube ]; then
fi

if [ ! -e ~/.kube/kubectl ]; then
wget https://storage.googleapis.com/kubernetes-release/release/v1.0.6/bin/linux/amd64/kubectl -O ~/.kube/kubectl
wget https://storage.googleapis.com/kubernetes-release/release/v1.1.4/bin/linux/amd64/kubectl -O ~/.kube/kubectl
chmod +x ~/.kube/kubectl
fi

if md5sum -c - <<<"${KUBECHECKSUM} `ls ~/.kube/config`"; then
if (echo "${KUBECHECKSUM} `ls ~/.kube/config`" | md5sum -c -); then
echo kubeconfig checksum matches;
else
wget ${KUBEURL} -O ~/.kube/config;
4 changes: 4 additions & 0 deletions continuousdelivery/jenkins.sh
@@ -39,6 +39,7 @@ export DOCKER_HOST=unix:///var/run/docker.sock
export DOCKER_REGISTRY=docker-registry.yourcompany.com
# the docker container defaulted to job/branch for jenkins
export CONTAINER1=$(tr [A-Z] [a-z] <<< ${JOB_NAME:0:8})/$(tr [A-Z] [a-z] <<< ${GIT_BRANCH:0:15}| tr -d '_-' | sed 's/\//-/g')
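# CONTAINER1 lowercases the first 8 characters of the job name and the first 15 of the branch, strips _ and -, and converts / to - so the result is a valid image name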
#export https_proxy=https://xxx.xxx.xxx.xxx:8080/ #uncomment if a proxy needed to access kubernetes api

#login to docker repo
#dockeruser and dockerpass are coming from a jenkins credential in this example
@@ -60,6 +61,9 @@ chmod +x ./deploy/deploy-service.sh && ./deploy/deploy-service.sh ${KUBECONTEXTQ
#put integration tests here
echo "put integration tests here"

#uncomment to force update of kubectl client
#rm ~/.kube/kubectl

#deploy to production cluster
./deploy/deploy-service.sh ${KUBECONTEXTPROD} ${KUBEDEPLOYMENTDIR}

4 changes: 2 additions & 2 deletions continuousdelivery/package.json
@@ -2,9 +2,9 @@
"name": "json-lint-express",
"preferGlobal": true,
"version": "0.0.1",
"author": "<me@concur.com>",
"author": "<danw@concur.com>",
"description": "a hello world nodejs app.",
"license": "MIT",
"license": "Apache License, Version 2.0",
"engines": {
"node": ">=0.10"
},
59 changes: 59 additions & 0 deletions continuousdelivery/solano.yml
@@ -0,0 +1,59 @@
system:
docker: true

environment:
###!!! variable expansion is not available in solanoci so dynamic vars are handled as a special case by forcing evaluation in each build\test step !!!###
# a url that your ci system can hit to pull down your kube config file
KUBEURL: http://
KUBECHECKSUM: a1e27f4bfad4df1de8f9a4662223dac7
# contexts from your kubeconfig file that are used for deployment
KUBECONTEXTQA: aws_kubernetes
KUBECONTEXTPROD: aws_kubernetes2
# update this to the directory where your yaml\json files are for kubernetes relative to your project root directory

KUBEDEPLOYMENTDIR: ./kubeyaml
BUILD: ${TDDIUM_TEST_EXEC_ID}
#BUILD is set to the following value in deploy-service.sh: ${TDDIUM_TEST_EXEC_ID}
# used for interpod and interservice communication
# Must be lowercase and <= 24 characters

SERVICENAME: '$(git config user.email | awk -F@ ''{print substr($1,1,8)}'' | tr [A-Z] [a-z])-$(echo ${TDDIUM_REPO_ROOT} | awk -F/ ''{print "s" substr($NF,1,14)}'' | tr -d ''_-'' | tr [A-Z] [a-z])'

# the docker repo
DOCKER_REGISTRY: docker-registry.yourcompany.com

# the docker container defaulted to user/project
CONTAINER1: '$(git config user.email | awk -F@ ''{print substr($1,1,8)}'' | tr [A-Z] [a-z])/$(echo ${TDDIUM_REPO_ROOT} | awk -F/ ''{print "s" substr($NF,1,14)}'' | tr -d ''_-'' | tr [A-Z] [a-z])'

timeout_hook: 900
cache:
save_paths:
- "HOME/.kube"

hooks:
pre_setup: |
eval BUILD=${BUILD}
eval SERVICENAME=${SERVICENAME}
eval CONTAINER1=${CONTAINER1}
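# solano does not expand nested variables in the environment block, so the dynamic values above are evaluated here (see the note at the top of this file)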
chmod +x ./deploy/ensure-kubectl.sh
./deploy/ensure-kubectl.sh ${KUBEURL}
set +x
echo ${DOCKER_REGISTRY}/${CONTAINER1}
sudo docker build -t ${DOCKER_REGISTRY}/${CONTAINER1} .
pre: npm install

nodejs:
version: '0.10.31'

tests:
- |
set -e
eval BUILD=$BUILD
eval SERVICENAME=$SERVICENAME
eval CONTAINER1=$CONTAINER1
sudo docker run -p 3000 --label ${CONTAINER1} ${DOCKER_REGISTRY}/${CONTAINER1} bash -c "npm test"
sudo docker tag -f ${DOCKER_REGISTRY}/${CONTAINER1}:latest ${DOCKER_REGISTRY}/${CONTAINER1}:build${TDDIUM_TEST_EXEC_ID}
sudo docker push ${DOCKER_REGISTRY}/${CONTAINER1}:build${TDDIUM_TEST_EXEC_ID}
sudo docker push ${DOCKER_REGISTRY}/${CONTAINER1}:latest
chmod +x ./deploy/deploy-service.sh
./deploy/deploy-service.sh ${KUBECONTEXTQA} ${KUBEDEPLOYMENTDIR}
