- A brief introduction to Continuous Delivery.
- How each tool fits in the big picture.
- The approach I propose, without the pain.
- Help to get you started on your own.
- Advance towards Continuous Deployment.
\textit{“Encouraging greater collaboration between everyone involved in software delivery in order to release valuable software faster and more reliably.”}
- Speed up the release of new features.
- Special focus on risk: automate everything!
- Advance towards Continuous Deployment.
- No need for code freeze.
- Automated tagging.
\textit{“Apache Maven is a software project management and comprehension tool. Based on the concept of a project object model (POM), Maven can manage a project’s build, reporting and documentation from a central piece of information.”}
\includegraphics[width=100pt]{maven.png}\small{http://maven.apache.org}
- All logic is isolated in its own module.
- No multi-module projects, unless for WARs.
- All modules inherit from a common, logic-less module: the parent POM.
- All in-house modules share the same version (\texttt{latest-SNAPSHOT}).
- Actual versions are resolved when generating releases.
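The version-resolution step could look roughly like this in the release job, assuming the versions-maven-plugin is available and the pipeline computes the concrete version number (the value below is hypothetical):

```shell
# Hypothetical release version, normally computed by the release job:
RELEASE_VERSION="1.2.3"
# Replace latest-SNAPSHOT with the concrete release version in every module:
mvn versions:set -DnewVersion="${RELEASE_VERSION}" -DgenerateBackupPoms=false
# Publish the release artifacts:
mvn deploy
```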
\textit{“An extendable open source continuous integration server.”}
\includegraphics[width=100pt]{jenkins.png}
- Helper job to automate the tagging and packaging process.
- Checks out parent-pom code.
- Gets launched for every change, and generates a new tagged release.
- Maven jobs in Jenkins run the embedded Maven engine.
- Maven annotates parent jobs as dependencies in the dependency graph.
- The new \textit{release} job cannot be a Maven job.
- Otherwise, it triggers an infinite loop of downstream jobs.
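Since the release job must be a freestyle (non-Maven) job, its shell build step could be sketched like this (the version scheme and tag naming are assumptions, not the actual job's):

```shell
# Derive a version, e.g. from a timestamp (a counter would work too):
VERSION="$(date +%Y%m%d.%H%M%S)"
# Tag the checked-out code and push the tag back to the origin:
git tag -a "release-${VERSION}" -m "Automated release ${VERSION}"
git push origin "release-${VERSION}"
```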
\textit{“An open platform for distributed applications for developers and sysadmins.”}
\includegraphics[width=100pt]{docker-whale-home-logo.png}
\textit{“The Docker Engine container comprises just the application and its dependencies. It runs as an isolated process in userspace on the host operating system, sharing the kernel with other containers. Thus, it enjoys the resource isolation and allocation benefits of VMs but is much more portable and efficient.”}
- \textbf{Image}: Packaged application and dependencies. Ready to launch.
- \textbf{Container}: An isolated (process, memory, network, etc.) environment, running an \textit{image}.
- \textbf{Volume}: A folder within a container, accessible from the host. Can be directly mapped to a folder in the host.
- \textbf{Link}: Docker mechanism to help containers communicate with each other. It’s defined as \texttt{--link container:alias}:
- \textit{container}: the name of the external, already running container,
- \textit{alias}: the name used locally in the new container, pointing to the external container. Docker adds it to \texttt{/etc/hosts} and defines some environment variables.
- \textbf{Exposed port}: The Docker daemon can map host ports to container ports when the container starts.
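The concepts above combine in a typical \texttt{docker run} invocation; image and container names here are made up:

```shell
# Start a container to link against:
docker run -d --name db mydb-image
# --link makes the 'db' container reachable as 'database' inside 'web',
# -v maps a host folder as a volume, and -p maps host port 8080 to
# container port 80:
docker run -d --name web --link db:database \
  -v /var/data/web:/opt/app/data -p 8080:80 myweb-image
```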
- A minimal Ubuntu base image modified for Docker-friendliness.
- Takes care of the problem of:
- Zombie processes,
- Logger daemon,
- Cron jobs.
- Motivation explained on their website: “Your Docker image might be broken without you knowing it”
\small{https://phusion.github.io/baseimage-docker/}
- Based on wking’s approach and code for Gentoo-based images: https://github.com/wking/dockerfile
- Modified for phusion-baseimage.
- Enhanced with in-house bash scripting framework: dry-wit.
- Allows placeholders in Dockerfiles.
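Placeholder resolution can be sketched with plain \texttt{sed}; the \texttt{@NAME@} syntax and the substituted value are assumptions for illustration, not dry-wit's actual conventions:

```shell
# A Dockerfile template with a placeholder:
cat > Dockerfile.template <<'EOF'
FROM phusion/baseimage
MAINTAINER @MAINTAINER@
EOF
# Resolve the placeholder into a concrete Dockerfile:
sed 's/@MAINTAINER@/rydnr/' Dockerfile.template > Dockerfile
cat Dockerfile
```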
\textit{“MCollective is a powerful orchestration framework.}
\textit{Run actions on thousands of servers simultaneously, using existing plugins or writing your own.”}
\includegraphics[width=100pt]{mcollective-logo.png}
\small{http://www.puppetlabs.com}
- Simple and straightforward.
- Fast enough up to a certain number of hosts.
- Easy and cheap to adapt to perform different tasks.
- Scriptable.
- Scripts with hard-coded host names or IPs.
- Requires way too much information about the production environment.
- Cannot easily run remote commands that expect interactive input.
- When the number of hosts grows, the risk of overlooking reported problems increases.
- Requires dealing with account permissions, SSO, etc.
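The plain-SSH approach usually boils down to a loop like this (host names, account, and command are hypothetical; the \texttt{echo} makes it a dry run, illustrating the hard-coded host list drawback):

```shell
run_everywhere() {
  local cmd="$1"
  # Hard-coded host names: the main drawback of this approach.
  for host in web01 web02 db01; do
    # Dry run: print the command instead of executing it.
    # Drop the 'echo' to actually run it over SSH.
    echo ssh "deploy@${host}" "${cmd}"
  done
}

run_everywhere 'uptime'
```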
- Scales with the number of hosts in production.
- Extendable via plugins.
- Doesn’t require system accounts or SSO on production hosts.
- Puppet module available for servers.
- More complex architecture.
- Requires middleware.
- Scaling beyond certain size requires tuning.
- Middleware should be fault-tolerant.
- Misconfigured setups can generate excessive traffic.
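Driving hosts through MCollective looks roughly like this from the client; the agent and argument names below are placeholders (they depend on the shell agent plugin installed), only \texttt{mco ping} and the \texttt{mco rpc}/fact-filter syntax are standard:

```shell
# Discover which nodes are reachable through the middleware:
mco ping
# Run a command on every node matching a fact filter (agent/action names
# are placeholders for the shell agent plugin in use):
mco rpc shell run command='docker ps' -F environment=test
```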
\textit{“Citadel is a toolkit for scheduling containers on a Docker cluster.”}
\includegraphics[width=100pt]{citadel-logo.png}\small{http://citadeltoolkit.org}
\textit{“Built on the Docker cluster management toolkit Citadel, Shipyard gives you the ability to manage Docker resources [..]”}
Plus: application routing and load balancing, centralized logging, deployment, etc.
\includegraphics[width=100pt]{shipyard-logo.png}\small{http://shipyard-project.com}
\textit{“Puppet manages your servers: you describe machine configurations in an easy-to-read declarative language, and Puppet will bring your systems into the desired state and keep them there.”}
\includegraphics[width=100pt]{puppet-logo.png}\small{http://www.puppetlabs.com}
- Images can be deployed anywhere.
- It doesn’t require a convention to map host volumes or data containers.
- Containers can respond to changes propagated via Puppet.
- Containers take much longer to start.
- Automatic generation, signing, and acceptance of SSL certificates.
- Puppet infrastructure required in production.
- Containers are stateless.
- Containers launch fast.
- Containers need to be prepared to read their configuration from plain files.
- The command for launching containers depends on the Puppet configuration for that host.
- Puppet infrastructure required in production.
- Data containers launch the Puppet agent: their configuration can evolve over time.
- Puppet sets up the configuration depending on the environment.
- Launching containers does not depend on the host.
- Puppet infrastructure needed in production.
- SSL certificate magic takes place on data containers.
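The data-container approach above can be sketched with \texttt{--volumes-from}; image and container names are made up:

```shell
# The data container runs the Puppet agent and holds the configuration,
# so it can evolve over time:
docker run -d --name myapp-config myapp-config-image
# The stateless application container mounts that configuration and
# reads it from plain files; the command does not depend on the host:
docker run -d --name myapp --volumes-from myapp-config myapp-image
```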
- Clone my repos: http://github.com/rydnr/dockerfile and http://github.com/rydnr/dry-wit
- Take http://github.com/rydnr/acmsl-jenkins-configs as a template for the \textbf{get-new-version} job.
- Build your custom Delivery Pipeline.
- Make Jenkins generate Docker images and push them to a private index.
- Build mcollective-client and mcollective-server images.
- Install shipyard and mcollective server agent in a test environment.
- Launch docker containers from the mcollective client, via mcollective shell agent.
- Try Interlock in the path to Continuous Deployment!
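The image-publishing step above (Jenkins building Docker images and pushing them to a private index) can be sketched as; the registry address and image name are hypothetical:

```shell
# Build the image from the job's workspace and tag it for the private index:
docker build -t registry.example.com:5000/myapp:latest .
# Push it to the private index so production hosts can pull it:
docker push registry.example.com:5000/myapp:latest
```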