diff --git a/lab-2/README.md b/lab-2/README.md index a158768..7fc4ed9 100644 --- a/lab-2/README.md +++ b/lab-2/README.md @@ -1,25 +1,25 @@ # Docker 101 - Linux (Part 2): Understanding the Docker File System and Volumes -We had an introduction to volumes by way of bind mounts earlier, but let's take a deeper look at the Docker file system and volumes. +We had an introduction to volumes by way of bind mounts earlier, but let's take a deeper look at the Docker file system and volumes. -The [Docker documentation](https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/_) gives a great explanation on how storage works with Docker images and containers, but here's the high points. +The [Docker documentation](https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/_) gives a great explanation on how storage works with Docker images and containers, but here's the high points. * Images are comprised of layers * These layers are added by each line in a Dockerfile * Images on the same host or registry will share layers if possible * When container is started it gets a unique writeable layer of its own to capture changes that occur while it's running -* Layers exist on the host file system in some form (usually a directory, but not always) and are managed by a [storage driver](https://docs.docker.com/engine/userguide/storagedriver/selectadriver/) to present a logical filesystem in the running container. +* Layers exist on the host file system in some form (usually a directory, but not always) and are managed by a [storage driver](https://docs.docker.com/engine/userguide/storagedriver/selectadriver/) to present a logical filesystem in the running container. * When a container is removed the unique writeable layer (and everything in it) is removed as well -* To persist data (and improve performance) Volumes are used. +* To persist data (and improve performance) Volumes are used. * Volumes (and the directories they are built on) are not managed by the storage driver, and will live on if a container is removed. -The following exercises will help to illustrate those concepts in practice. +The following exercises will help to illustrate those concepts in practice. Let's start by looking at layers and how files written to a container are managed by something called *copy on write*. ## Layers and Copy on Write -> Note: If you have just completed part 1 of the workshop, please close that session and start a new one. +> Note: If you have just completed part 1 of the workshop, please close that session and start a new one. 1. In PWD click "+Add new instance" and move into that command windows. @@ -60,17 +60,17 @@ Let's start by looking at layers and how files written to a container are manage `85b1f47fba49: Already exists` - Notice that the layer id (`85b1f47fba498`) is the same for the first layer of the MySQl image and the only layer in the Debian:Jessie image. And because we already had pulled that layer when we pulled the Debian image, we didn't have to pull it again. + Notice that the layer id (`85b1f47fba498`) is the same for the first layer of the MySQl image and the only layer in the Debian:Jessie image. And because we already had pulled that layer when we pulled the Debian image, we didn't have to pull it again. - So, what does that tell us about the MySQL image? Since each layer is created by a line in the image's *Dockerfile*, we know that the MySQL image is based on the Debian:Jessie base image. 
We can confirm this by looking at the [Dockerfile on Docker Store](https://github.com/docker-library/mysql/blob/0590e4efd2b31ec794383f084d419dea9bc752c4/5.7/Dockerfile). + So, what does that tell us about the MySQL image? Since each layer is created by a line in the image's *Dockerfile*, we know that the MySQL image is based on the Debian:Jessie base image. We can confirm this by looking at the [Dockerfile on Docker Store](https://github.com/docker-library/mysql/blob/0590e4efd2b31ec794383f084d419dea9bc752c4/5.7/Dockerfile). - The first line in the the Dockerfile is: `FROM debian:jessie` This will import that layer into the MySQL image. + The first line in the Dockerfile is: `FROM debian:jessie`. This will import that layer into the MySQL image. - So layers are created by Dockerfiles and are are shared between images. When you start a container, a writeable layer is added to the base image. + So layers are created by Dockerfiles and are shared between images. When you start a container, a writeable layer is added to the base image. - Next you will create a file in our container, and see how that's represented on the host file system. + Next you will create a file in your container, and see how that's represented on the host file system. -3. Start a Debian container, shell into it. +3. Start a Debian container and shell into it. ``` $ docker run --tty --interactive --name debian debian:jessie bash @@ -85,19 +85,19 @@ Let's start by looking at layers and how files written to a container are manage bin dev home lib64 mnt proc run srv test-file usr boot etc lib media opt root sbin sys tmp var ``` - We can see `test-file` exists in the root of the containers file system. + We can see `test-file` exists in the root of the container's file system. - What has happened is that when a new file was written to the disk, the Docker storage driver placed that file in it's own layer. This is called *copy on write* - as soon as a change is detected the change is copied into the writeable layer. That layers is represented by a directory on the host file system. All of this is managed by the Docker storage driver. + What has happened is that when a new file was written to disk, the Docker storage driver placed that file in its own layer. This is called *copy on write* - as soon as a change is detected, the change is copied into the writeable layer. That layer is represented by a directory on the host file system. All of this is managed by the Docker storage driver. 5. Exit the container but leave it running by pressing `ctrl-p` and then `ctrl-q` - The Docker hosts for the labs today use OverlayFS with the [overlay2](https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/#how-the-overlay2-driver-works) storage driver. + The Docker hosts for the labs today use OverlayFS with the [overlay2](https://docs.docker.com/engine/userguide/storagedriver/overlayfs-driver/#how-the-overlay2-driver-works) storage driver. - OverlayFS layers two directories on a single Linux host and presents them as a single directory. These directories are called layers and the unification process is referred to as a union mount. OverlayFS refers to the lower directory as lowerdir and the upper directory a upperdir. "Upper" and "Lower" refer to when the layer was added to the image. In our example the writeable layer is the most "upper" layer. The unified view is exposed through its own directory called merged. + OverlayFS layers two directories on a single Linux host and presents them as a single directory.
These directories are called layers and the unification process is referred to as a union mount. OverlayFS refers to the lower directory as lowerdir and the upper directory a upperdir. "Upper" and "Lower" refer to when the layer was added to the image. In our example the writeable layer is the most "upper" layer. The unified view is exposed through its own directory called merged. - We can use Docker's *inspect* command to look at where these directories live on our Docker host's file system. + We can use Docker's *inspect* command to look at where these directories live on our Docker host's file system. - > Note: The *inspect* command uses Go templates to allow us to extract out specific information from its output. For more information on how these templates work with *inspect* read this [excellent tutorial](http://container-solutions.com/docker-inspect-template-magic/). + > Note: The *inspect* command uses Go templates to allow us to extract out specific information from its output. For more information on how these templates work with *inspect* read this [excellent tutorial](http://container-solutions.com/docker-inspect-template-magic/). ``` $ docker inspect -f '{{json .GraphDriver.Data}}' debian | jq @@ -108,11 +108,12 @@ Let's start by looking at layers and how files written to a container are manage "WorkDir": "/var/lib/docker/overlay2/0dad4d523351851af4872f8c6706fbdf36a6fa60dc7a29fff6eb388bf3d7194e/work" } ``` + > Note: `WorkDir` is a working directory for the Overlay2 driver - Since the change we made is the newest modification to the Debian container's file system, it's going to be stored in `UpperDir`. + Since the change we made is the newest modification to the Debian container's file system, it's going to be stored in `UpperDir`. -6. List the contents of the `UpperDir`. +6. List the contents of the `UpperDir`. ``` $ cd $(docker inspect -f {{.GraphDriver.Data.UpperDir}} debian) @@ -134,9 +135,9 @@ Let's start by looking at layers and how files written to a container are manage dev lib mnt root srv tmp ``` - Notice that the directory on our host file system has the same contents as the one inside the container. That's because that directory is what we see in the container. + Notice that the directory on our host file system has the same contents as the one inside the container. That's because that directory is what we see in the container. - > Warning: You should NEVER manipulate your container's file system via the Docker host. This is only being done as an academic exercise. + > Warning: You should NEVER manipulate your container's file system via the Docker host. This is only being done as an academic exercise. 8. Write a new file to the host file system in the `UpperDir`, and list the directory to see the contents @@ -149,7 +150,6 @@ Let's start by looking at layers and how files written to a container are manage test-file test-file2 ``` - 9. Move back into your Debian container and list the root file system ``` @@ -159,8 +159,8 @@ Let's start by looking at layers and how files written to a container are manage bin dev home lib64 mnt proc run srv test-file tmp var boot etc lib media opt root sbin sys test-file2 usr ``` - - The file that was created on the local host filesystem (`test-file2`) is now available in the container as well. + + The file that was created on the local host filesystem (`test-file2`) is now available in the container as well. 10. 
Type `exit` to stop your container, which will also stop it @@ -184,7 +184,7 @@ Let's start by looking at layers and how files written to a container are manage test-file test-file2 ``` - Because the container still exists, the files are still available on your file system. At this point you could `docker start` your container and it would be just as it was before you exited. + Because the container still exists, the files are still available on your file system. At this point you could `docker start` your container and it would be just as it was before you exited. However, if we remove the container, the directories on the host file system will be removed, and your changes will be gone @@ -199,12 +199,12 @@ Let's start by looking at layers and how files written to a container are manage The files that were created are now gone. You've actually been left in a sort of "no man's land" as the directory you're in has actually been deleted as well. -14. Copy the directory location from the prompt in the terminal. +14. Copy the directory location from the prompt in the terminal. 15. CD back to your home directory ``` - $ cd + cd ``` 16. Attempt to list the contents of the old `UpperDir` directory. @@ -216,17 +216,17 @@ Let's start by looking at layers and how files written to a container are manage ## Understanding Docker Volumes -[Docker volumes](https://docs.docker.com/engine/admin/volumes/volumes/) are directories on the host file system that are not managed by the storage driver. Since they are not managed by the storage drive they offer a couple of important benefits. +[Docker volumes](https://docs.docker.com/engine/admin/volumes/volumes/) are directories on the host file system that are not managed by the storage driver. Since they are not managed by the storage drive they offer a couple of important benefits. * **Performance**: Because the storage driver has to create the logical filesystem in the container from potentially many directories on the local host, accessing data can be slow. Especially if there is a lot of write activity to that container. In fact you should try and minimize the amount of writes that happen to the container's filesystem, and instead direct those writes to a volume -* **Persistence**: Volumes are not removed when the container is deleted. They exist until explicitly removed. This means data written to a volume can be reused by other containers. +* **Persistence**: Volumes are not removed when the container is deleted. They exist until explicitly removed. This means data written to a volume can be reused by other containers. -Volumes can be anonymous or named. Anonymous volumes have no way for the to be explicitly referenced. They are almost exclusively used for performance reasons as you cannot persist data effectively with anonymous volumes. Named volumes can be explicitly referenced so they can be used to persist data and increase performance. +Volumes can be anonymous or named. Anonymous volumes have no way for the to be explicitly referenced. They are almost exclusively used for performance reasons as you cannot persist data effectively with anonymous volumes. Named volumes can be explicitly referenced so they can be used to persist data and increase performance. -The next sections will cover both anonymous and named volumes. +The next sections will cover both anonymous and named volumes. 
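Before moving on, here is a minimal sketch of how the two styles differ on the command line (the container and volume names, and the use of a plain `alpine` image, are illustrative only; the sections below work through the same ideas with MySQL):

```
# Anonymous volume: only a target path is given, so Docker generates a random name
$ docker run --detach --name anon-demo -v /data alpine top

# Named volume: created explicitly (or on first use) and easy to reference later
$ docker volume create demo-data
$ docker run --detach --name named-demo -v demo-data:/data alpine top

# Volumes are objects in their own right and can be listed and inspected
$ docker volume ls
$ docker volume inspect demo-data
```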
-> Special Note: These next sections were adapted from [Arun Gupta's](https://twitter.com/arungupta) excellent [tutorial](http://blog.arungupta.me/docker-mysql-persistence/) on persisting data with MySQL. +> Special Note: These next sections were adapted from [Arun Gupta's](https://twitter.com/arungupta) excellent [tutorial](http://blog.arungupta.me/docker-mysql-persistence/) on persisting data with MySQL. ### Anonymous Volumes @@ -238,8 +238,7 @@ VOLUME /var/lib/mysql This line sets up an anonymous volume in order to increase database performance by avoiding sending a bunch of writes through the Docker storage driver. -Note: An anonymous volume is a volume that hasn't been explicitly named. This means that it's extremely difficult to use the volume later with a new container. Named volumes solve that problem, and will be covered later in this section. - +Note: An anonymous volume is a volume that hasn't been explicitly named. This means that it's extremely difficult to use the volume later with a new container. Named volumes solve that problem, and will be covered later in this section. 1. Start a MySQL container @@ -252,7 +251,6 @@ Note: An anonymous volume is a volume that hasn't been explicitly named. This me 2. Use Docker inspect to view the details of the anonymous volume - ``` $ docker inspect -f 'in the {{.Name}} container {{(index .Mounts 0).Destination}} is mapped to {{(index .Mounts 0).Source}}' mysqldb in the /mysqldb container /var/lib/mysql is mapped to /var/lib/docker/volumes/cd79b3301df29d13a068d624467d6080354b81e34d794b615e6e93dd61f89628/_data @@ -273,7 +271,7 @@ Note: An anonymous volume is a volume that hasn't been explicitly named. This me Notice the the directory name starts with `/var/lib/docker/volumes/` whereas for directories managed by the Overlay2 storage driver it was `/var/lib/docker/overlay2` - As mentined anonymous volumes will not persist data between containers, they are almost always used to increase performance. + As mentined anonymous volumes will not persist data between containers, they are almost always used to increase performance. 4. Shell into your running MySQL container and log into MySQL @@ -326,7 +324,7 @@ Note: An anonymous volume is a volume that hasn't been explicitly named. This me 1 row in set (0.00 sec) ``` -6. Exit MySQL and the MySQL container. +6. Exit MySQL and the MySQL container. ``` mysql> exit @@ -385,7 +383,7 @@ Note: An anonymous volume is a volume that hasn't been explicitly named. This me 1 row in set (0.00 sec) ``` -10. Exit MySQL and the MySQL container. +10. Exit MySQL and the MySQL container. ``` mysql> exit @@ -395,7 +393,7 @@ Note: An anonymous volume is a volume that hasn't been explicitly named. This me exit ``` - The table persisted across container restarts, which is to be expected. In fact, it would have done this whether or not we had actually used a volume as shown in the previous section. + The table persisted across container restarts, which is to be expected. In fact, it would have done this whether or not we had actually used a volume as shown in the previous section. 11. Let's look at the volume again @@ -404,11 +402,11 @@ Note: An anonymous volume is a volume that hasn't been explicitly named. This me in the /mysqldb container /var/lib/mysql is mapped to /var/lib/docker/volumes/cd79b3301df29d13a068d624467d6080354b81e34d794b615e6e93dd61f89628/_data ``` - We do see the volume was not affected by the container restart either. + We do see the volume was not affected by the container restart either. 
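As an aside, the Go template used with *inspect* above pulls out only the first mount via `index .Mounts 0`; if a container had more than one mount you could range over all of them instead. This is just a sketch using the same `mysqldb` container, with the mapping taken from the inspect output shown earlier:

```
$ docker inspect -f '{{range .Mounts}}{{.Destination}} is mapped to {{.Source}}{{"\n"}}{{end}}' mysqldb
/var/lib/mysql is mapped to /var/lib/docker/volumes/cd79b3301df29d13a068d624467d6080354b81e34d794b615e6e93dd61f89628/_data
```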
- Where people often get confused is in expecting that the anonymous volume can be used to persist data BETWEEN containers. + Where people often get confused is in expecting that the anonymous volume can be used to persist data BETWEEN containers. - To examine that delete the old container, create a new one with the same command, and check to see if the table exists. + To examine that delete the old container, create a new one with the same command, and check to see if the table exists. 12. Remove the current MySQL container @@ -431,7 +429,7 @@ Note: An anonymous volume is a volume that hasn't been explicitly named. This me in the /mysqldb container /var/lib/mysql is mapped to /var/lib/docker/volumes/e0ffdc6b4e0cfc6e795b83cece06b5b807e6af1b52c9d0b787e38a48e159404a/_data ``` - Notice this directory is different than before. + Notice this directory is different than before. 15. Shell back into the running container and log into MySQL @@ -464,7 +462,7 @@ Note: An anonymous volume is a volume that hasn't been explicitly named. This me Empty set (0.00 sec) ``` -17. Exit MySQL and the MySQL container. +17. Exit MySQL and the MySQL container. ``` mysql> exit @@ -481,15 +479,15 @@ Note: An anonymous volume is a volume that hasn't been explicitly named. This me mysqldb ``` -So while a volume was used to store the new table in the original container, because it wasn't a named volume the data could not be persisted between containers. +So while a volume was used to store the new table in the original container, because it wasn't a named volume the data could not be persisted between containers. To achieve persistence a named volume should be used. ### Named Volumes -A named volume (as the name implies) is a volume that's been explicitly named and can easily be referenced. +A named volume (as the name implies) is a volume that's been explicitly named and can easily be referenced. -A named volume can be create on the command line, in a docker-compose file, and when you start a new container. They [CANNOT be created as part of the image's dockerfile](https://github.com/moby/moby/issues/30647). +A named volume can be create on the command line, in a docker-compose file, and when you start a new container. They [CANNOT be created as part of the image's dockerfile](https://github.com/moby/moby/issues/30647). 1. Start a MySQL container with a named volume (`dbdata`) @@ -504,9 +502,9 @@ A named volume can be create on the command line, in a docker-compose file, and mysql ``` - Because the newly created volume is empty, Docker will copy over whatever existed in the container at `/var/lib/mysql` when the container starts. + Because the newly created volume is empty, Docker will copy over whatever existed in the container at `/var/lib/mysql` when the container starts. - Docker volumes are primatives just like images and containers. As such, they can be listed and removed in the same way. + Docker volumes are primatives just like images and containers. As such, they can be listed and removed in the same way. 2. List the volumes on the Docker host @@ -536,7 +534,7 @@ A named volume can be create on the command line, in a docker-compose file, and ] ``` - Any data written to `/var/lib/mysql` in the container will be rerouted to `/var/lib/docker/volumes/mydbdata/_data` instead. + Any data written to `/var/lib/mysql` in the container will be rerouted to `/var/lib/docker/volumes/mydbdata/_data` instead. 4. 
Shell into your running MySQL container and log into MySQL @@ -580,7 +578,7 @@ A named volume can be create on the command line, in a docker-compose file, and 1 row in set (0.00 sec) ``` -6. Exit MySQL and the MySQL container. +6. Exit MySQL and the MySQL container. ``` mysql> exit @@ -593,12 +591,12 @@ A named volume can be create on the command line, in a docker-compose file, and 7. Remove the MySQL container ``` - $ docker container rm --force mysqldb + docker container rm --force mysqldb ``` - Because the MySQL was writing out to a named volume, we can start a new container with the same data. + Because the MySQL was writing out to a named volume, we can start a new container with the same data. - When the container starts it will not overwrite existing data in a volume. So the data created in the previous steps will be left intact and mounted into the new container. + When the container starts it will not overwrite existing data in a volume. So the data created in the previous steps will be left intact and mounted into the new container. 8. Start a new MySQL container @@ -652,9 +650,9 @@ A named volume can be create on the command line, in a docker-compose file, and 1 row in set (0.00 sec) ``` - The data will exist until the volume is explicitly deleted. + The data will exist until the volume is explicitly deleted. -11. Exit MySQL and the MySQL container. +11. Exit MySQL and the MySQL container. ``` mysql> exit @@ -674,4 +672,4 @@ A named volume can be create on the command line, in a docker-compose file, and mydbdata ``` - If a new container was started with the previous command, it would create a new empty volume. + If a new container was started with the previous command, it would create a new empty volume. diff --git a/lab-3/README.md b/lab-3/README.md index d4620ec..1ac64ab 100644 --- a/lab-3/README.md +++ b/lab-3/README.md @@ -1,8 +1,8 @@ # Docker 101 - Linux (Part 3): Docker Swarm and Container Networking -So far all of the previous exercises have been based around running a single container on a single host. +So far all of the previous exercises have been based around running a single container on a single host. -This section will cover how to use multiple hosts to provide fault tolerance as well as incresed performance. As part of that discussion it will also provide an overview of Docker's multi-host networking capabilities. +This section will cover how to use multiple hosts to provide fault tolerance as well as incresed performance. As part of that discussion it will also provide an overview of Docker's multi-host networking capabilities. #### What is "Orchestration" @@ -10,12 +10,11 @@ If you heard about containers you've probably heard about orchstration as well. Clustering is the concept of taking a group of machines and treating them as a single computing resource. These machines are capable of accepting any workload becaues they all offer the same capabilities. These clustered machines don't have to be running on the same infrastructure - they could be a mix of bare metal and VMs for instance. -Scheduling is the process of deciding where a workload should reside. When an admin starts a new instance of website she can decide what region it needs to go on or if it should be on bare metal or in the cloud. The scheduler will make that happen. Schedulers also make sure that the application maintains its desired state. 
For example, if there were 10 copies of a web site running, and one of them crashed, the scheduler would know this and start up a new instance to take the failed one's place. +Scheduling is the process of deciding where a workload should reside. When an admin starts a new instance of website she can decide what region it needs to go on or if it should be on bare metal or in the cloud. The scheduler will make that happen. Schedulers also make sure that the application maintains its desired state. For example, if there were 10 copies of a web site running, and one of them crashed, the scheduler would know this and start up a new instance to take the failed one's place. +With Docker ther is a built-in orchestrator: Docker Swarm. It provides both clustering and scheduling as well as many other advanced services. -With Docker ther is a built-in orchestrator: Docker Swarm. It provides both clustering and scheduling as well as many other advanced services. - -The next part of the lab will start with the deployment of a 3 node Docker swarm cluster. +The next part of the lab will start with the deployment of a 3 node Docker swarm cluster. ### Build your cluster @@ -29,7 +28,7 @@ Note: If you have just completed a previous part of the workshop, please close t 3. Click the `+ Add New Instance` - There are now three standalone Docker hosts. + There are now three standalone Docker hosts. 4. In the console for `node1` initialize Docker Swarm @@ -44,7 +43,7 @@ Note: If you have just completed a previous part of the workshop, please close t To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions. ``` - `node1` is now a Swarm manager node. Manager nodes are responsible for ensuring the integrity of the cluster as well as managing running services. + `node1` is now a Swarm manager node. Manager nodes are responsible for ensuring the integrity of the cluster as well as managing running services. 5. Copy the `docker swarm join` output from `node1` @@ -63,7 +62,7 @@ Note: If you have just completed a previous part of the workshop, please close t This node joined a swarm as a worker. ``` - The three nodes have now been clustered into a single Docker swarm. An important thing to note is that clusters can be made up of Linux nodes, Windows nodes, or a combination of both. + The three nodes have now been clustered into a single Docker swarm. An important thing to note is that clusters can be made up of Linux nodes, Windows nodes, or a combination of both. 8. Switch back to `node1` @@ -77,20 +76,20 @@ Note: If you have just completed a previous part of the workshop, please close t xflngp99u1r9pn7bryqbbrrvq win000046 Ready Active ``` - Commands against the swarm can only be issued from the manager node. Attempting to run the above command against `node2` or `node3` would result in an error. + Commands against the swarm can only be issued from the manager node. Attempting to run the above command against `node2` or `node3` would result in an error. ``` $ docker node ls Error response from daemon: This node is not a swarm manager. Worker nodes can't be used to view or modify cluster state. Please run this command on a manager node or promote the current node to a manager. ``` -With the Swarm cluster built it's time to move to a discussion on networking. +With the Swarm cluster built it's time to move to a discussion on networking. Docker supports several different networking options, but this lab will cover the two most popular: bridge and overlay. 
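As a quick preview of the difference (a sketch only; the network names here are placeholders, and the exercises below create and use real networks step by step), the scope of a network is decided by the driver chosen at creation time:

```
# Bridge network: exists only on the host where it is created
$ docker network create --driver bridge my_local_net

# Overlay network: swarm-scoped, extended to every node that runs an attached container
# (--attachable lets standalone `docker run` containers join it, not just services)
$ docker network create --driver overlay --attachable my_swarm_net
```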
-Bridge networks are only available on the local host, and can be created on hosts in swarm clusters as well as standalone hosts. However, in a swarm cluster, even though the machines are tied together, bridge networks only work on the host on which they were created. +Bridge networks are only available on the local host, and can be created on hosts in swarm clusters as well as standalone hosts. However, in a swarm cluster, even though the machines are tied together, bridge networks only work on the host on which they were created. -Overlay networks faciliate the creation of networks that span Docker hosts. While it's possible to network together hosts that are not in a Swarm cluster, it's a very manual task requiring the addition of an external key value store. With Docker swarm creating overlay networks is trivial. +Overlay networks faciliate the creation of networks that span Docker hosts. While it's possible to network together hosts that are not in a Swarm cluster, it's a very manual task requiring the addition of an external key value store. With Docker swarm creating overlay networks is trivial. ### Bridge networking overview @@ -118,7 +117,7 @@ As previously mentioned, bridge networks faciliate the create of software-define The newly created `mybridge` network is listed. - > Note: Docker creates several networks by default, however the purpose of those networks is outside the scope of this workshop. + > Note: Docker creates several networks by default, however the purpose of those networks is outside the scope of this workshop. 3. Switch to `node2` @@ -134,7 +133,7 @@ As previously mentioned, bridge networks faciliate the create of software-define 3dec80db87e4 none null local ``` - Notice that the same networks names exist on `node2` but their ID's are different. And, `mybridge` does not show up at all. + Notice that the same networks names exist on `node2` but their ID's are different. And, `mybridge` does not show up at all. 5. Move back to `node1` @@ -152,7 +151,8 @@ As previously mentioned, bridge networks faciliate the create of software-define Status: Downloaded newer image for alpine:latest 974903580c3e452237835403bf3a210afad2ad1dff3e0b90f6d421733c2e05e6 ``` - > Note: We run the `top` process to keep the container from exiting as soon as it's created. + + > Note: We run the `top` process to keep the container from exiting as soon as it's created. 7. Start another Alpine container named `alpine client` @@ -171,7 +171,7 @@ As previously mentioned, bridge networks faciliate the create of software-define ping: bad address 'alpine_host' ``` - Because the two containers are not on the same network they cannot reach each other. + Because the two containers are not on the same network they cannot reach each other. 9. Inspect `alpine_host` and `alpine_client` to see which networks they are attached to. @@ -183,7 +183,7 @@ As previously mentioned, bridge networks faciliate the create of software-define map[bridge:0xc4204420c0] ``` - `alpine_host` is, as expected, attached to the `mybridge` network. + `alpine_host` is, as expected, attached to the `mybridge` network. `alpine_client` is attached to the default bridge network `bridge` @@ -194,7 +194,7 @@ As previously mentioned, bridge networks faciliate the create of software-define alpine_client ``` -11. Start another container called `alpine_client` but attach it to the `mybridge` network this time. +11. Start another container called `alpine_client` but attach it to the `mybridge` network this time. 
``` $ docker container run \ @@ -228,9 +228,9 @@ As previously mentioned, bridge networks faciliate the create of software-define round-trip min/avg/max = 0.088/0.106/0.122 ms ``` - Something to notice is that it was not necessary to specify an IP address. Docker has a built in DNS that resolved `alpine_client` to the correct address. + Something to notice is that it was not necessary to specify an IP address. Docker has a built-in DNS that resolved `alpine_client` to the correct address. -Being able to network containers on a single host is not extremely useful. It might be fine for a simple test envrionment, but production environments require the ability provide the scalability and fault tolerance that comes from having multiple interconnected hosts. +Being able to network containers on a single host is of limited use. It might be fine for a simple test environment, but production environments require the ability to provide the scalability and fault tolerance that comes from having multiple interconnected hosts. This is where overlay networking comes in. @@ -238,7 +238,7 @@ This is where overlay networking comes in. Overlay networks in Docker are software-defined networks that span multiple hosts (unlike a bridge network which is limited to a single Docker host). This allows containers on different hosts to easily communicate on the Docker networking fabric (vs having to move out to the host's network). -This next section covers building an overlay network and having two containers communicate with each other. +This next section covers building an overlay network and having two containers communicate with each other. 1. Remove the existing Alpine containers @@ -257,7 +257,7 @@ This next section covers building an overlay network and having two containers c > Note: We have to use the `--attachable` flag because by default you cannot use `docker run` on overlay networks that are part of a swarm. The preferred method is to use a Docker *service* which is covered later in the workshop. -3. List the networks on the host to verify that the `myoverlay` network was created. +3. List the networks on the host to verify that the `myoverlay` network was created. ``` $ docker network ls @@ -271,7 +271,7 @@ This next section covers building an overlay network and having two containers c dbd52ffda3ae none null local ``` -3. Create an Alpine container and attach it to the `myoverlay` network. +4. Create an Alpine container and attach it to the `myoverlay` network. ``` $ docker container run \ @@ -298,7 +298,7 @@ This next section covers building an overlay network and having two containers c Notice anything out of the ordinary? Where's the `myoverlay` network? - Docker won't extend the network to hosts where it's not needed. In this case, there are no containers attached to `myoverlay` on `node2` so the network has not been extended to the host. + Docker won't extend the network to hosts where it's not needed. In this case, there are no containers attached to `myoverlay` on `node2`, so the network has not been extended to that host. 6. Start an alpine container and attach it to `myoverlay` @@ -342,6 +342,7 @@ This next section covers building an overlay network and having two containers c 64 bytes from 10.0.0.2: seq=3 ttl=64 time=0.201 ms 64 bytes from 10.0.0.2: seq=4 ttl=64 time=0.137 ms ``` + Networking also works between Linux and Windows nodes. 9. 
Move to the `node3` @@ -367,7 +368,7 @@ This next section covers building an overlay network and having two containers c While we have been using `docker run` to instantiate docker containers on our Swarm cluster, the preferred way to actually run applications is via a [*service*](https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/). -Services are an abstraction in Docker that represent an application or component of an application. For instance a web front end service connecting to a database backend service. You can deploy an application made up of a single service. In fact, this is quite common when [modernizing traditional applications](https://goto.docker.com/rs/929-FJL-178/images/SB_MTA_04.14.2017.pdf). +Services are an abstraction in Docker that represent an application or component of an application. For instance a web front end service connecting to a database backend service. You can deploy an application made up of a single service. In fact, this is quite common when [modernizing traditional applications](https://goto.docker.com/rs/929-FJL-178/images/SB_MTA_04.14.2017.pdf). The service construct provides a host of useful features including: @@ -379,9 +380,9 @@ The service construct provides a host of useful features including: * Upgrades and rollback * Scaling -This workshop cannot possibly cover all these topics, but will address several key points. +This workshop cannot possibly cover all these topics, but will address several key points. -This lab will deploy a two service application. The application features a Java-based web front end running on Linux, and a Microsoft SQL server running on Windows. +This lab will deploy a two service application. The application features a Java-based web front end running on Linux, and a Microsoft SQL server running on Windows. ### Deploying an Application with Docker Swarm @@ -406,12 +407,13 @@ This lab will deploy a two service application. The application features a Java redis ywlkfxw2oim67fuf9tue7ndyi ``` + The service is created with the following parameters: * `--name`: Gives the service an easily remembered name * `--endpoint-mode`: Today all services running on Windows need to be started in DNS round robin mode. * `--network`: Attaches the containers from our service to the `atsea` network - * `--publish`: Exposes port 6379 but only on the host. + * `--publish`: Exposes port 6379 but only on the host. * `--detach`: Runs the service in the background * Our service is based off the image `sixeyed/atsea-db:mssql` @@ -465,8 +467,7 @@ We've successfully deployed our application. Another key point is that our application code knows nothing about our networking code. The only thing it knows is that the database hostname is going to be `database`. So in our application code database connection string looks like this; - -So long as the database service is started with the name `database` and is on the same Swarm network, the two services can talk. +So long as the database service is started with the name `database` and is on the same Swarm network, the two services can talk. ### Upgrades and Rollback @@ -499,7 +500,7 @@ A common scenario is the need to upgrade an application or application component suee368vg3r1 \_ appserver.1 layer5/awesomeapp:1.0 node1 Shutdown Shutdown 24 seconds ago ``` - Clearly there is some issue, as the containers are failing to start. + Clearly there is some issue, as the containers are failing to start. 4. 
Check on the status of the update @@ -512,9 +513,9 @@ A common scenario is the need to upgrade an application or application component } ``` - Because we had set ` --update-failure-action` to pause, Swarm paused the update. + Because we had set `--update-failure-action` to pause, Swarm paused the update. - In the case of failed upgrade, Swarm makes it easy to recover. Simply issue the `--rollback` command to the service. + In the case of a failed upgrade, Swarm makes it easy to recover. Simply issue the `--rollback` command to the service. 5. Roll the service back to the original version @@ -526,7 +527,7 @@ A common scenario is the need to upgrade an application or application component appserver ``` - 6. Check on the status of the service +6. Check on the status of the service ``` $ docker service ps appserver @@ -538,11 +539,11 @@ A common scenario is the need to upgrade an application or application component z7toh7jwk8qf \_ appserver.1 layer5/awesomeapp:1.0 node1 Shutdown Shutdown about a minute ago ``` - The top line shows the service is back on the `1.0` version, and running. + The top line shows the service is back on the `1.0` version, and running. 7. Visit the website to make sure it's running - That was a simulated upgrade failure and rollback. Next the service will be successfully upgraded to version 3 of the app. + That was a simulated upgrade failure and rollback. Next the service will be successfully upgraded to version 3 of the app. 8. Upgrade to version 3 @@ -570,7 +571,7 @@ A common scenario is the need to upgrade an application or application component ### Scale the front end -The new update has really increased traffic to the site. As a result we need to scale our web front end out. This is done by issuing a `docker service update` and specifying the number of replicas to deploy. +The new update has really increased traffic to the site. As a result we need to scale our web front end out. This is done by issuing a `docker service update` and specifying the number of replicas to deploy. 1. Scale to 6 replicas of the web front-end @@ -601,16 +602,17 @@ The new update has really increased traffic to the site. As a result we need to jqkokd2uoki6 appserver.6 layer5/awesomeapp:3.0 node1 Running Running 12 seconds ag ``` -Docker is starting up 5 new instances of the appserver, and is placing them across both the nodes in the cluster. +Docker is starting up 5 new instances of the appserver, and is placing them across the nodes in the cluster. -When all 6 nodes are running, move on to the next step. +When all 6 replicas are running, move on to the next step. ### Failure and recovery -The next exercise simulates a node failure. When a node fails the containers that were running there are, of course, lost as well. Swarm is constantly monitoring the state of the cluster, and when it detects an anomoly it attemps to bring the cluster back in to compliance. -In it's current state, Swarm expects there to be six instances of the appserver. When the node "fails" thre of those instances will go out of service. +The next exercise simulates a node failure. When a node fails, the containers that were running there are, of course, lost as well. Swarm is constantly monitoring the state of the cluster, and when it detects an anomaly it attempts to bring the cluster back into compliance. + +In its current state, Swarm expects there to be six instances of the appserver. When the node "fails", three of those instances will go out of service. -1. 
Putting a node into *drain* mode forces it to stop all the running containers it hosts, as well as preventing it from running any additional containers. ``` $ docker node update \ diff --git a/lab-4/README.md b/lab-4/README.md index 21bd4b6..5b2b51b 100644 --- a/lab-4/README.md +++ b/lab-4/README.md @@ -1,4 +1,5 @@ # Docker 101 - Linux (Part 4): Container Monitoring + Now that you are up and running, let's expose your application outside the service mesh. -https://meshery.io +