To make more efficient use of the caching mechanism Docker uses when building images, order the commands in your Dockerfile so that those whose output changes most frequently are executed last. Let's take the Dockerfile from a simple Python app as an example (you created it previously, or you can find it in the resources for this exercise, inside the `1-caching` folder).
- Build the image once.
- Build the image again. Notice that the second time it is very quick, and each instruction is marked with `---> Using cache` or `CACHED`.
- Now modify the `server.py` file. Simply change one of the strings.
- Build the image once more. Notice that it had to install the requirements again, even though they had not changed. This step of installing dependencies can be time-consuming and typically changes much less frequently than your code. That is why best practice dictates installing dependencies before copying your actual source code.
- Modify the `Dockerfile` to copy the `requirements.txt` file (but not `server.py`), install the dependencies, and then copy the actual application code (the `server.py` file).
- As before, build the image, then change the code in `server.py` and build the image again. Notice how this time the dependencies are retrieved from the cache.
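The reordered Dockerfile might look like the following sketch. The base image tag and the `CMD` line are assumptions; adapt them to the actual Dockerfile from the exercise:

```dockerfile
# Base image tag is an assumption; use whatever the exercise Dockerfile uses
FROM python:3.9-slim

WORKDIR /app

# Copy only the dependency list first: this layer, and the install step
# below it, stay cached as long as requirements.txt does not change
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code last, so editing server.py only
# invalidates this layer, not the dependency installation
COPY server.py .

CMD ["python", "server.py"]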
You will now explore the advantages of using a multi-stage build to remove unnecessary dependencies from the final distributable image. This is particularly relevant for languages that require compilation, so the example uses a very simple C++ program which checks whether a number is prime.
- Open a terminal inside the `exercise-06/2-multi-stage-builds-cpp` folder.

- Build the first Dockerfile supplied:

  ```
  docker build -t ex6-2:v1 -f Dockerfile.1 .
  ```

- Run the container to verify everything works:

  ```
  docker run --rm ex6-2:v1 29
  ```

  You should see the message `29 is a prime number`.

- Check the contents of `Dockerfile.1`. Notice how it uses a base image (`gcc`) that already contains the necessary tools for compilation.

- Repeat the above steps for the second Dockerfile:

  ```
  docker build -t ex6-2:v2 -f Dockerfile.2 .
  docker run --rm ex6-2:v2 29
  ```

  You should observe the same result as before.

- `Dockerfile.2` uses a different base image, `alpine`, which does not include the compilation tools, so they need to be installed.

- Open `Dockerfile.3` and spot the differences with the previous one. Notice that it performs the same initial steps, but then copies the compiled artefact into a brand-new `alpine` image, where it only installs the runtime dependencies.

- Build the image. You will notice that the first steps are shown as cached, because they are exactly the same as in `Dockerfile.2`. Remember that the Docker cache works even across different Dockerfiles.

  ```
  docker build -t ex6-2:v3 -f Dockerfile.3 .
  ```

- Lastly, let's compare the sizes of the images. You can do this by running:

  ```
  docker images ex6-2
  ```

  Unsurprisingly, the last version is the smallest, since it does not contain the very large compile-time dependencies, only the runtime ones.

You can also use the `docker history` command to inspect the different layers created:

```
docker history ex6-2:v3
```
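A multi-stage Dockerfile along the lines of `Dockerfile.3` might look like the sketch below. The source file name `prime.cpp` and the `libstdc++` runtime package are assumptions; the actual file in the exercise may differ:

```dockerfile
# Build stage: alpine with the compiler installed, as in Dockerfile.2
FROM alpine AS build
RUN apk add --no-cache g++
COPY prime.cpp .
RUN g++ -o prime prime.cpp

# Final stage: a fresh alpine image with only the runtime dependency,
# into which just the compiled artefact is copied
FROM alpine
RUN apk add --no-cache libstdc++
COPY --from=build /prime /usr/local/bin/prime
ENTRYPOINT ["/usr/local/bin/prime"]
```

The compiler and all intermediate build layers stay in the `build` stage and never reach the final image, which is why `docker images` shows it as much smaller.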
In this exercise you are going to use all your Docker knowledge to create an optimized Dockerfile that deploys an Angular application inside an nginx web server.

Inside the `3-sample-angular-app` folder you will find a very simple Angular application. Create a `Dockerfile` inside this folder to distribute the app so that it runs inside `nginx`.
Hints:
- Use one of the official `node:14` images.
- Set a workdir like `/app` in which to copy and build the application.
- Dependencies are installed by running `npm install`. This only requires access to the `package.json` file (and optionally `package-lock.json`, if it exists).
- You can build the application using the `npm run build` command. This builds it into the `dist/my-app/` folder. For this to work, it requires:
  - all the files from the `src` folder
  - `angular.json`, all `tsconfig` files and `.browserslistrc`
- Distribute the application inside an `nginx` server. By default, it serves the files it finds in the `/usr/share/nginx/html/` folder.
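Putting the hints together, one possible shape for the Dockerfile is sketched below. The output path `dist/my-app/` comes from the hints; the stage name `build` and the exact set of `COPY` lines are assumptions:

```dockerfile
# Build stage: node:14 ships npm and everything needed to build the app
FROM node:14 AS build
WORKDIR /app

# Install dependencies first so they are cached independently of the code
COPY package.json package-lock.json ./
RUN npm install

# Copy only the files the build needs, then build into dist/my-app/
COPY angular.json tsconfig*.json .browserslistrc ./
COPY src/ src/
RUN npm run build

# Final stage: serve the compiled app from nginx's default html folder
FROM nginx
COPY --from=build /app/dist/my-app/ /usr/share/nginx/html/
```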
Once you are able to build your image successfully, try running the container and opening the sample app in the browser.
As part of the build process of the Angular app, also ensure that all tests are executed. This sample app comes with Karma already configured, and tests can be executed via `npm run test`. However, this requires the Chrome browser to be installed, so we will need to do the following:
- Ensure your base image is `node:14`.
- Create a new stage, `test`, where you will install Chrome. E.g.:

  ```
  RUN apt-get update \
      && wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb \
      && apt install -y ./google-chrome*.deb
  ```

- Set the `CHROME_BIN` environment variable to the location of Chrome: `/usr/bin/google-chrome`.
- Copy the test configuration files `karma.conf.js` and `tsconfig.spec.json` into the root folder.
- Run the tests:

  ```
  npm run test -- --no-watch --no-progress --browsers=ChromeHeadlessNoSandbox
  ```
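The steps above might come together as the following stage sketch. It assumes the earlier stage that copied the sources and ran `npm install` is named `build`, and that a `ChromeHeadlessNoSandbox` launcher is configured in `karma.conf.js`:

```dockerfile
# Test stage: based on the build stage, with Chrome added for Karma
FROM build AS test

# Install Chrome (node:14 is Debian-based, so apt is available)
RUN apt-get update \
    && wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb \
    && apt install -y ./google-chrome*.deb

# Tell Karma where to find the browser binary
ENV CHROME_BIN=/usr/bin/google-chrome

# Copy the test configuration and run the tests headlessly
COPY karma.conf.js tsconfig.spec.json ./
RUN npm run test -- --no-watch --no-progress --browsers=ChromeHeadlessNoSandbox
```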
In this exercise we are going to analyze the optimization of an existing dockerized application, using the example in the `4-multi-stage-builds-php` folder. Build the image with the supplied file `Dockerfile.1` and run it. Things to note:
- By default, `docker build` looks for a `Dockerfile` in the context folder. If you want to use a file with a different name, you can use the `-f` option.
- The application is written in `php`.
- It runs inside an Apache web server. When requesting the root, it displays a welcome message.
- You need to expose port 80 of the container when running it.
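The build and run steps described above might look like this; the image tag `ex6-4:v1` and the host port `8080` are arbitrary choices:

```
# Build using the non-default Dockerfile name via -f
docker build -t ex6-4:v1 -f Dockerfile.1 .

# Publish container port 80 on host port 8080,
# then open http://localhost:8080 in the browser
docker run --rm -p 8080:80 ex6-4:v1
```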
In many languages the requirements to build an application differ from those needed to run it. Multi-stage builds are particularly well suited to reducing the size and complexity of the final image, by splitting the build process into separate steps run in different base images.
In this example with `php`, building the application requires `composer` (the `php` dependency manager). Instead of installing it on the final image, we are going to leverage an existing image that already has it installed.
Try building and running the second version, `Dockerfile.2`. Notice that, instead of installing `composer` manually, it uses an existing image with it already installed. However, this image does not contain `apache`, which is required to run the application. That is why the file then has a second `FROM` to load a different image. In line 19, it copies, using the `--from` parameter, the artifacts from the first stage of the build, which ran in the `composer` image, onto the final `php:apache` one.
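The structure described above might be sketched like this. The paths (`/app`, `/var/www/html`) and the exact `COPY` lines are assumptions; compare with the actual `Dockerfile.2` from the exercise:

```dockerfile
# Build stage: the official composer image already ships the dependency manager
FROM composer AS build
WORKDIR /app
COPY composer.json ./
RUN composer install

# Final stage: php with apache, which the composer image lacks
FROM php:apache
# Copy the vendored dependencies produced in the build stage, plus the code
COPY --from=build /app/vendor/ /var/www/html/vendor/
COPY src/ /var/www/html/
```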
Build the different Dockerfiles and use `docker history` to examine the resulting images. Can you spot the differences?