diff --git a/ch3/ansible/roles/nodejs-app/tasks/main.yml b/ch3/ansible/roles/nodejs-app/tasks/main.yml
index b134d09..11df3a5 100644
--- a/ch3/ansible/roles/nodejs-app/tasks/main.yml
+++ b/ch3/ansible/roles/nodejs-app/tasks/main.yml
@@ -1,15 +1,15 @@
-- name: Add Node packages to yum
+- name: Add Node packages to yum
   shell: curl -fsSL https://rpm.nodesource.com/setup_21.x | bash -

 - name: Install Node.js
   yum:
     name: nodejs

-- name: Create app user
+- name: Create app user
   user:
     name: app-user

-- name: Install pm2
+- name: Install pm2
   npm:
     name: pm2
     version: latest
diff --git a/ch3/tofu/modules/lambda/outputs.tf b/ch3/tofu/modules/lambda/outputs.tf
index 0313e21..9b6556d 100644
--- a/ch3/tofu/modules/lambda/outputs.tf
+++ b/ch3/tofu/modules/lambda/outputs.tf
@@ -1,3 +1,7 @@
 output "function_arn" {
   value = aws_lambda_function.function.arn
+}
+
+output "iam_role_arn" {
+  value = aws_iam_role.lambda.arn
 }
\ No newline at end of file
diff --git a/docs/02.html b/docs/02.html
index a42e0c1..e2b7a43 100644
--- a/docs/02.html
+++ b/docs/02.html
@@ -921,41 +921,39 @@
-Sometimes, this is done because your company is resource constrained (e.g., a tiny startup), and you can’t afford to
-buy proper servers.
+Your company is resource constrained (e.g., a tiny startup), and you can’t afford to buy proper servers.
-Sometimes, this is done because the person running the app didn’t know any better.
+The person running the app doesn’t know any better.
-Sometimes, this is done because the company’s software delivery process is so slow and cumbersome that the only way
-to get something running quickly is to sneak it onto an office computer.
+The company’s software delivery process is so slow and cumbersome that the only way to get something running quickly
+is to sneak it onto an office computer.

 The cloud industry is massive and growing at an incredible rate. For example, as of the end of
-2023, AWS alone is a $100 billion per year run-rate
-business.[5]
-Yup, $100 billion per year. That’s with a "B." That’s not a typo. And they are still growing fast. The amount of
-money that major cloud providers can invest in their offerings utterly dwarfs what almost any other company could
-ever hope to invest into an on-prem deployment. So the cloud is not only already way ahead of where you could get to
-today with on-prem, but that lead will only widen over time.
+2023, AWS alone is a
+$100
+billion per year run-rate business. Yup, $100 billion per year. That’s billion with a "B." That’s not a typo. And
+they are still growing fast. The amount of money that major cloud providers can invest in their offerings utterly
+dwarfs what almost any other company could ever hope to invest into an on-prem deployment. So the cloud is not only
+already way ahead of where you could get to today with on-prem, but that lead will only widen over time.

@@ -1868,7 +1864,7 @@
-It’s worth mentioning that it doesn’t have to be cloud vs on-prem; it can also be cloud and on-prem. That is, you use
-both, which is known as a hybrid deployment. The most common use cases for this are:
+It’s worth mentioning that it doesn’t have to be cloud vs on-prem; it can also be cloud and on-prem, as discussed
+next.
+
+A hybrid deployment is when you use a mixture of cloud and on-prem. The most common use cases for this are:
My goal with this blog post series is to allow as many readers as possible to try out the examples, and to do so as
@@ -2038,7 +2034,7 @@
About a year later, in 2007, a company called Heroku came out with one of the first Platform as a Service (PaaS)
-offerings.[7] The
+offerings.[6] The
key difference with PaaS is that the focus is on higher level primitives: not just the underlying infrastructure (i.e.,
@@ -2090,7 +2086,7 @@
AdministratorAccess
Managed Policy to your IAM user (search for it, and click the checkbox next to it),
-as shown in Figure 2.[8]
+as shown in Figure 2.[7]
+As a general rule, you want to use a PaaS whenever you can, and only move on to IaaS when a PaaS can no longer meet
+your requirements, as discussed in the next section.

-As a general rule, you want to use a PaaS whenever you can, and only move on to IaaS when a PaaS can no longer meet
-your requirements. Here are a few of the most common cases where IaaS is usually the best choice:
+Here are a few of the most common cases where IaaS is usually the best choice:
-Your business may need to provide uptime guarantees (e.g., SLAs, a topic you’ll learn more about in
-Part 10) that are higher than what your PaaS can provide. Moreover, when there is an outage
-or a bug, PaaS offerings are often limited in the type of visibility and connectivity options they provide: e.g.,
-many PaaS offerings don’t let you SSH to the server (e.g., this has been a limitation in Heroku for over a decade),
-which can make debugging a lot harder. As your company and architecture grow larger and more complicated, being able
-to introspect your systems becomes more and more important, and this may be a reason to go with IaaS over PaaS.
+Your business may need to provide uptime guarantees (e.g., service level agreements, or SLAs, a topic you’ll learn
+more about in Part 10) that are higher than what your PaaS can provide. Moreover, when there
+is an outage or a bug, PaaS offerings are often limited in the type of visibility and connectivity options they
+provide: e.g., many PaaS offerings don’t let you SSH to the server (e.g., this has been a limitation in Heroku for
+over a decade), which can make debugging a lot harder. As your company and architecture grow larger and more
+complicated, being able to introspect your systems becomes more and more important, and this may be a reason to go
+with IaaS over PaaS.

-Here’s an outline of this blog post:
+Here’s what we’ll cover in this blog post:
 Before digging into the details of various IaC tools, it’s worth asking, why bother? Learning and adopting new tools has
-a cost, so what are the benefits of IaC that make this worthwhile?
+a cost, so what are the benefits of IaC that make this worthwhile? This is the focus of the next section.

-The answer is that when your infrastructure is defined as code, you are able to use a wide variety of software
-engineering practices to dramatically improve your software delivery process, including the following:
+When your infrastructure is defined as code, you are able to use a wide variety of software engineering practices to
+dramatically improve your software delivery process, including the following:

 The examples in this blog post are still simplified for learning and not suitable for production
-usage, due to the security concerns and user data limitations explained in Watch out for snakes: these are simplified examples for learning, not for production. You’ll
+usage, due to the security concerns and user data limitations explained in Watch out for snakes: these examples have several problems. You’ll
 see how to work around some of these limitations starting in the next chapter.
@@ -1804,7 +1804,7 @@
-Now that you have a server to work with, you can see what configuration management tools are really designed to do:
-configuring servers to run software. The first step is to tell Ansible what server(s) you want to configure, or what
-Ansible calls its inventory. If you have a set of physical servers on-prem, you can put the IP addresses of those
-servers in an inventory file, as shown in Example 6:
+Now that you have a server to work with, you can see what configuration management tools are really designed to do:
+configuring servers to run software.
+
+In order for Ansible to be able to configure your servers, you have to provide an inventory, which is a file that
+specifies which servers you want configured, and how to connect to them. If you have a set of physical servers on-prem,
+you can put the IP addresses of those servers in an inventory file, as shown in Example 6:
+Configure the servers using an Ansible role called sample-app
, as discussed next.
To create the sample-app role for this playbook, create a roles/sample-app folder in the same directory as +
To create the sample-app
role for this playbook, create a roles/sample-app folder in the same directory as
configure_sample_app_playbook.yml:
A container emulates the user space of an OS.[10] You run a container engine, such as Docker or cri-o, to create isolated processes, memory, mount +
A container emulates the user space of an OS.[9] You run a container engine, such as Docker or cri-o, to create isolated processes, memory, mount points, and networking.
@@ -3166,7 +3172,7 @@ap-southeast-2
(Sydney). Within each region, there are multiple isolated datacenters known as Availability
-Zones (AZs), such as us-east-2a
, us-east-2b
, and so on.[12] There are
+Zones (AZs), such as us-east-2a
, us-east-2b
, and so on.[11] There are
many other settings you can configure on this provider, but for now, let’s keep it simple.
@@ -4404,7 +4410,7 @@ Here’s an outline of this post:
+Here’s what we’ll cover in this post:
Let’s get started by understanding exactly what orchestration is, and why it’s important.
+ +You need a way to initially deploy one or more replicas of your app onto your servers. After the initial deployment, +
You need a way to initially deploy one or more replicas of your app onto your servers.
-you need a way to periodically roll out updates to all replicas of your app, and in most cases, you want a way to +After the initial deployment, you need a way to periodically roll out updates to all replicas of your app, and in + +most cases, you want a way to roll out those updates without your users experiencing downtime (known as a + +zero-downtime deployment).
 I’ve seen companies use a variety of tools for implementing this approach, including configuration management
-tools (e.g., Ansible, Chef, Puppet), specialized deployment scripts (e.g., Capistrano,
-Deployer, Mina, Fabric,
-Shipit), and, perhaps the most common approach, thousands and thousands of
-ad hoc scripts.
+tools (e.g., Ansible, Chef, Puppet),
+specialized deployment scripts (e.g., Capistrano, Deployer,
+Mina, Fabric, Shipit), and,
+perhaps the most common approach, thousands and thousands of ad hoc scripts.
-To get a feel for server orchestration, let’s use Ansible. In Section 2.3.1, you saw how to
+To get a feel for server orchestration, let’s use Ansible. In Part 2, you saw how to
 deploy a single EC2 instance using Ansible. In this post, you’ll first use Ansible to deploy
-multiple EC2 instances, and once you have several servers to work with, you’ll be able to see what server orchestration
+multiple servers, and once you have several servers to work with, you’ll be able to see what server orchestration
 looks like in practice.
-Head into the fundamentals-of-devops folder you created in Part 1 to work through the
-examples in this blog post series, and create a new subfolder for this blog post and the Ansible
-playbook:
-
-$ cd fundamentals-of-devops
-$ mkdir -p ch3/ansible
-$ cd ch3/ansible
-
-Inside the ansible folder, create a new playbook called create_ec2_instances_playbook.yml (note the "s" in
-"instances," implying multiple instances, unlike the playbook from Part 2), with the
-contents shown in Example 26:
+The blog post series’s sample code repo in GitHub contains an Ansible playbook called
+create_ec2_instances_playbook.yml (note the "s" in "instances," implying multiple instances, unlike the playbook from
+Part 2) in the ch3/ansible folder that can do the following:

-- name: Deploy EC2 instances in AWS
-  hosts: localhost
-  gather_facts: no
-  environment:
-    AWS_REGION: us-east-2
-  vars_prompt: # (1)
-    - name: num_instances
-      prompt: How many instances to create?
-      private: false
-    - name: base_name
-      prompt: What to use as the base name for resources?
-      private: false
-    - name: http_port
-      prompt: What port to use for HTTP requests?
-      private: false
-  tasks:
-    - name: Create security group
-      amazon.aws.ec2_security_group:
-        name: "{{ base_name }}"
-        description: "{{ base_name }}"
-        rules:
-          - proto: tcp
-            ports: ["{{ http_port }}"]
-            cidr_ip: 0.0.0.0/0
-          - proto: tcp
-            ports: [22]
-            cidr_ip: 0.0.0.0/0
-      register: aws_security_group
-    - name: Create a new EC2 key pair
-      amazon.aws.ec2_key:
-        name: ansible-ch3 # (2)
-        file_name: ansible-ch3.key
-      no_log: true
-      register: aws_ec2_key_pair
-    - name: Create EC2 instances with Amazon Linux 2023 AMI
-      loop: "{{ range(num_instances | int) | list }}" # (3)
-      amazon.aws.ec2_instance:
-        name: "{{ '%s-%d' | format(base_name, item) }}" # (4)
-        key_name: "{{ aws_ec2_key_pair.key.name }}"
-        instance_type: t2.micro
-        security_group: "{{ aws_security_group.group_id }}"
-        image_id: ami-0900fe555666598a2
-        tags:
-          Ansible: "{{ base_name }}" # (5)
-
-This is similar to the Ansible Playbook you saw in Section 2.3.1, which deployed a
-single EC2 instance, except for the following changes to allow deploying multiple instances:
-
-In order to make this playbook reusable for creating instances for a variety of use cases, it uses vars_prompt to
-prompt you for several input variables: num_instances, which is how many EC2 instances to create; base_name,
-which will be used to name all the resources created by this playbook; http_port, which is the port the instance
-should listen on for HTTP requests.
-
-This playbook uses a new name for the EC2 key pair to ensure it doesn’t conflict with key pairs from other
-posts.
-
-This playbook uses the loop keyword to create multiple EC2 instances. loop takes in a list and loops over
-the items in that list, just like a for-loop in a general purpose programming language. The list this code passes
-to loop is generated by the range(N) function, which returns the integers from 0 to N. In this case, N is
-set to num_instances, which is one of the variables this playbook will prompt you for.
-
-As the loop keyword iterates through each item in the list, it makes that item available to your code under the
-item keyword. Since the list just contains integers generated by the range function, that means item will be
-set to the digits 0, 1, 2, and so on. The code uses the format function to give each EC2 instance a unique name
-that includes the base_name followed by the digit in item: so if you enter sample_app_instances as the
-base_name, the instances will be named sample_app_instances_0, sample_app_instances_1,
-sample_app_instances_2, and so on.
-
-Set the Ansible tag on each instance to the value of base_name. You will use this in the next section.
+Prompt you for several input variables:
+
+num_instances: How many EC2 instances to create.
+
+base_name: What to name all the resources created by this playbook.
+
+http_port: What port the instances should listen on for HTTP requests.
+
+Create multiple EC2 instances, each with the Ansible tag set to base_name.
+
+Create a security group for the instances which opens up port 22 (for SSH access) and http_port (for HTTP access).
+
+Create an EC2 Key Pair you can use to connect to those instances via SSH.
+
+To use this playbook, git clone the sample code repo, if you haven’t already (if you are new to Git, check out the
+Git tutorial in Part 4):
+
+$ git clone {code_samples_repo_clone_url}
+
+This will check out the sample code into the devops-book folder. Next, head into the
+fundamentals-of-devops folder you created in Part 1 to work through the examples in this
+blog post series, and create a new ch3/ansible subfolder:
+
+$ cd fundamentals-of-devops
+$ mkdir -p ch3/ansible
+$ cd ch3/ansible
+
+Copy create_ec2_instances_playbook.yml from the devops-book folder into ch3/ansible:
+
+$ cp -r ../../devops-book/ch3/ansible/create_ec2_instances_playbook.yml .
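To see the playbook's loop-based naming pattern in isolation, here is a small, self-contained sketch (the variable values and task are assumptions for illustration, not part of the real playbook) that only prints the name each instance would get:

```yaml
# Hypothetical demo of the loop/range/format pattern used by
# create_ec2_instances_playbook.yml: loop over range(num_instances) and
# build a unique name from base_name plus the loop item (0, 1, 2, ...).
- name: Demonstrate loop-based instance naming
  hosts: localhost
  gather_facts: no
  vars:
    num_instances: 3
    base_name: sample_app_instances
  tasks:
    - name: Print the name each instance would get
      loop: "{{ range(num_instances | int) | list }}"
      debug:
        msg: "{{ '%s-%d' | format(base_name, item) }}"
```

Running this with ansible-playbook should print sample_app_instances-0, sample_app_instances-1, and sample_app_instances-2.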
ansible-playbook
as before. Ansible will start to interactively prompt
-you for the variables in vars_prompt
:
+you for input variables:
 You can enter the values interactively and hit Enter, or, alternatively, you can define the variables in a YAML file,
-such as the sample-app-vars.yml file shown in Example 27:
+such as the sample-app-vars.yml file shown in Example 26:$ ansible-playbook -v create_ec2_instances_playbook.yml --extra-vars "@sample-app-vars.yml"+
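The body of that example is not included in this diff; based on the three variables the playbook prompts for, a plausible (hypothetical) sample-app-vars.yml would be:

```yaml
# Hypothetical sample-app-vars.yml (values assumed for illustration):
# one entry per prompted variable, so ansible-playbook can skip the prompts
num_instances: 3
base_name: sample_app_instances
http_port: 8080
```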
$ ansible-playbook \ + + -v create_ec2_instances_playbook.yml \ + + --extra-vars "@sample-app-vars.yml"
As explained in Watch out for snakes: these are simplified examples for learning, not for production, the code used in the previous blog posts had a +
-As explained in Watch out for snakes: these are simplified examples for learning, not for production, the code used in the previous blog posts had a
-number of concerns related to security and reliability issues: e.g., running the app as a root user, listening on port
-80, no automatic app restart in case of crashes, and so on. It’s time to fix these issues and get this code a bit
-closer to something you could use in production.
+As explained in Watch out for snakes: these examples have several problems, the code used to deploy apps in the previous
+blog posts had a number of concerns related to security and reliability issues: e.g., running the app
+as a root user, listening on port 80, no automatic app restart in case of crashes, and so on. It’s time to fix these
+issues and get this code a bit closer to something you could use in production.
This file configures the user, private key, and host key checking settings for the sample_app_instances
group. Now you
can use a playbook to configure the servers in this group to run the Node.js sample app. Create a new playbook called
configure_sample_app_playbook.yml, with the contents shown in
-Example 30:
Target the sample_app_instances group you just configured in your inventory.
+Target the sample_app_instances
group you just configured in your inventory.
Instead of a single sample-app role that does everything, as you saw in Section 2.3.1, the +
Instead of a single sample-app
role that does everything, as you saw in Part 2, the
-code in this blog post uses two roles. The first role, called nodejs-app, is responsible for
+code in this blog post uses two roles. The first role, called nodejs-app
, is responsible for
configuring a server to run Node.js apps. You’ll see the code for this role shortly.
The second role is called sample-app, and it’s responsible for running the sample-app. You’ll see the code +
The second role is called sample-app
, and it’s responsible for running the sample app. You’ll see the code
for this role shortly as well.
The sample-app role will be executed as the OS user app-user
, which is a user that the nodejs-app role creates,
+
The sample-app
role will be executed as the OS user app-user
, which is a user that the nodejs-app
role creates,
rather than as the root user.
The nodejs-app role contains just a single file and folder, tasks/main.yml:
+Create just a single file and folder for the nodejs-app
role, tasks/main.yml:
Create tasks/main.yml with the contents shown in Example 31:
+Put the code shown in Example 30 into tasks/main.yml:
nodejs-app
role (ch3/ansible/roles/nodejs-app/tasks/main.yml)# (1)
-
-- name: Add Node packages to yum
+- name: Add Node packages to yum # (1)
shell: curl -fsSL https://rpm.nodesource.com/setup_21.x | bash -
@@ -2158,9 +2050,7 @@ Example: app security a
-# (2)
-
-- name: Create app user
+- name: Create app user # (2)
user:
@@ -2168,9 +2058,7 @@ Example: app security a
-# (3)
-
-- name: Install pm2
+- name: Install pm2 # (3)
npm:
@@ -2214,7 +2102,7 @@ Example: app security a
Create a new OS user called app-user
. This allows you to run your apps with a user with more limited permissions
-than root or ec2-user (who can use sudo
to access root permissions).
+than root.
@@ -2230,7 +2118,7 @@ Example: app security a
-As you can see, the nodejs-app role is fairly generic: it’s designed so you can use it with any Node.js app, which
+
As you can see, the nodejs-app
role is fairly generic: it’s designed so you can use it with any Node.js app, which
makes this a highly reusable piece of code.
@@ -2238,9 +2126,9 @@ Example: app security a
-The sample-app role, on the other hand, is specifically designed to run the sample app. Here’s the folder structure
+
The sample-app
role, on the other hand, is specifically designed to run the sample app. Create two subfolders
-for this role:
+for this role, files and tasks:
@@ -2270,19 +2158,31 @@ Example: app security a
-files/app.js should be the same sample app code you saw in Example 1 and earlier in
+
app.js is the exact same "Hello, World" Node.js sample app you saw in Part 1. Copy it into the
-this post. app.config.js is a new file that is used to configure PM2. So, what is PM2?
+files folder:
+
+
+
+
+
+
+
+$ cp ../../ch1/sample-app/app.js roles/sample-app/files/
+
+
-PM2 is a process supervisor, which is a tool you can use to run your apps, monitor them,
+
app.config.js is a new file that is used to configure PM2. So, what is PM2? PM2 is a
+
+process supervisor, which is a tool you can use to run your apps, monitor them, restart them after a reboot or a
-restart them after a reboot or a crash, manage their logging, and so on. Process supervisors provide one layer of auto
+crash, manage their logging, and so on. Process supervisors provide one layer of auto healing for long-running apps.
-healing for long-running apps. You’ll see other types of auto healing later in this post.
+You’ll see other types of auto healing later in this post.
@@ -2296,13 +2196,13 @@ Example: app security a
it has features designed specifically for Node.js apps. To use these features, create a configuration file called
-app.config.js, as shown in Example 32:
+app.config.js, as shown in Example 31:
-Example 32. PM2 configuration file (ch3/ansible/roles/sample-app/files/app.config.js)
+Example 31. PM2 configuration file (ch3/ansible/roles/sample-app/files/app.config.js)
@@ -2352,7 +2252,7 @@ Example: app security a
-
-
Run app.js as the Node.js app.
+Run app.js to start the app.
@@ -2366,7 +2266,7 @@ Example: app security a
-
-
Use all CPUs available in cluster mode.
+Configure cluster mode to use all available CPUs.
@@ -2384,13 +2284,13 @@ Example: app security a
-Finally, create tasks/main.yml with the contents shown in Example 33:
+Finally, create tasks/main.yml with the contents shown in Example 32:
-Example 33. The sample-app role’s tasks (ch3/ansible/roles/sample-app/tasks/main.yml)
+Example 32. The sample-app
role’s tasks (ch3/ansible/roles/sample-app/tasks/main.yml)
@@ -2458,7 +2358,7 @@ Example: app security a
-These changes address most of the concerns in Watch out for snakes: these are simplified examples for learning, not for production, improving your security posture
+
These changes address most of the concerns in Watch out for snakes: these examples have several problems, improving your security posture
(no more root user) and the reliability and performance of your app (process supervisor, cluster mode).
@@ -2512,7 +2412,7 @@ Example: app security a
-Copy the IP of one of the three servers, open "http://<IP>:8080" in your web browser, and you should see the
+
Copy the IP of one of the three servers, open http://<IP>:8080
in your web browser, and you should see the
familiar "Hello, World!" text once again.
@@ -2520,37 +2420,35 @@ Example: app security a
-While three servers is great for redundancy, it’s not so great for usability. You typically want to give your users
-just a single IP to hit (or better yet, a single domain name, as you’ll see in Part 6). This
-requires deploying a load balancer, as described in the next section.
+While three servers is great for redundancy, it’s not so great for usability, as your users typically want just a
+single endpoint to hit. This requires deploying a load balancer, as described in the next section.
-
+
-Example: load balancing using Ansible and Nginx
+Example: Deploy a Load Balancer Using Ansible and Nginx
 A load balancer is a piece of software that can distribute load across multiple servers or apps. You give your users
-a single endpoint to hit, which is the load balancer, and under the hood, the load balancer forwards on requests to
-a number of different endpoints, using various algorithms (e.g., round-robin, hash-based, least-response-time, etc.) to
-process requests as efficiently as possible. There are many popular load balancer options out there, such as
-Apache, Nginx, and HAProxy, as well as
-cloud-specific load balancers, such as the AWS Elastic Load Balancers,
-GCP Cloud Load Balancers, and
-Azure Load Balancers.
+a single endpoint to hit, which is the load balancer, and under the hood, the load balancer forwards the requests it
+receives to a number of different endpoints, using various algorithms (e.g., round-robin, hash-based,
+least-response-time, etc.) to process requests as efficiently as possible. There are many popular load balancer options
+out there, such as Apache, Nginx, and HAProxy,
+as well as cloud-specific load balancing services, such as AWS Elastic Load
+Balancer, GCP Cloud Load Balancer, and
+Azure Load Balancer.
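Of the algorithms just listed, round-robin is the simplest (and the one Nginx uses for an upstream group by default): each request goes to the next server in the list, wrapping around at the end. A tiny sketch in JavaScript (a hypothetical helper, not code from the blog's repo):

```javascript
// Minimal round-robin load balancing sketch: returns a picker function that
// cycles through the given servers, one per call, wrapping around.
function roundRobin(servers) {
  let next = 0;
  return function pick() {
    const server = servers[next];
    next = (next + 1) % servers.length; // wrap around at the end of the list
    return server;
  };
}

// Example: three app servers, like the three EC2 instances in this post
const pick = roundRobin(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]);
console.log(pick()); // 10.0.0.1:8080
console.log(pick()); // 10.0.0.2:8080
console.log(pick()); // 10.0.0.3:8080
console.log(pick()); // 10.0.0.1:8080 again
```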
@@ -2570,13 +2468,13 @@ Example: load balancing using Ansible an
paragraphs. If not, you can deploy one more EC2 instance using the same create_ec2_instances_playbook.yml, but with a
-new variables file, nginx-vars.yml, with the contents shown in Example 34:
+new variables file, nginx-vars.yml, with the contents shown in Example 33:
-Example 34. Variables file to create an EC2 instance for nginx (ch3/ansible/nginx-vars.yml)
+Example 33. Variables file to create an EC2 instance for nginx (ch3/ansible/nginx-vars.yml)
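The body of that example is not in this diff; based on the variables the playbook prompts for, a plausible (hypothetical) nginx-vars.yml is:

```yaml
# Hypothetical nginx-vars.yml (values assumed for illustration): a single
# instance, grouped under nginx_instances, listening on the default HTTP port
num_instances: 1
base_name: nginx_instances
http_port: 80
```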
@@ -2610,7 +2508,11 @@ Example: load balancing using Ansible an
-$ ansible-playbook -v create_ec2_instances_playbook.yml --extra-vars "@nginx-vars.yml"
+$ ansible-playbook \
+
+ -v create_ec2_instances_playbook.yml \
+
+ --extra-vars "@nginx-vars.yml"
@@ -2622,13 +2524,13 @@ Example: load balancing using Ansible an
nginx_instances
, that will also be the group name in the inventory, so configure the variables for this group by
-creating group_vars/nginx_instances.yml with the contents shown in Example 35:
+creating group_vars/nginx_instances.yml with the contents shown in Example 34:
-Example 35. Configure group variables for your Nginx servers (ch3/ansible/group_vars/nginx_instances.yml)
+Example 34. Configure group variables for your Nginx servers (ch3/ansible/group_vars/nginx_instances.yml)
@@ -2654,13 +2556,13 @@ Example: load balancing using Ansible an
Now you can create a new playbook to configure these servers with Nginx. Create a new file called
-configure_nginx_playbook.yml with the contents shown in Example 36:
+configure_nginx_playbook.yml with the contents shown in Example 35:
-Example 36. Use a role to configure the EC2 instance with Nginx (ch3/ansible/configure_nginx_playbook.yml)
+Example 35. Use a role to configure the EC2 instance with Nginx (ch3/ansible/configure_nginx_playbook.yml)
@@ -2706,7 +2608,7 @@ Example: load balancing using Ansible an
-
-
Configure the servers in that group using a new role called nginx, which is described next.
+Configure the servers in that group using a new role called nginx
, which is described next.
@@ -2716,7 +2618,7 @@ Example: load balancing using Ansible an
-The nginx role has the following folder structure:
+Create a new folder for the nginx
role with tasks and templates subfolders:
@@ -2748,13 +2650,13 @@ Example: load balancing using Ansible an
Inside of nginx/templates/nginx.conf.j2, create an Nginx configuration file template, as shown in
-Example 37:
+Example 36:
-Example 37. Nginx configuration file template (ch3/ansible/roles/nginx/templates/nginx.conf.j2)
+Example 36. Nginx configuration file template (ch3/ansible/roles/nginx/templates/nginx.conf.j2)
@@ -2856,7 +2758,7 @@ Example: load balancing using Ansible an
Use the upstream keyword to define a group of servers that can be referenced elsewhere in this file by the name
-"backend." You’ll see where this is used shortly.
+backend
. You’ll see where this is used shortly.
@@ -2864,15 +2766,15 @@ Example: load balancing using Ansible an
Use Jinja templating syntax to loop over the servers in the
-sample_app_instances group.
+sample_app_instances
group.
-
-
Use Jinja templating syntax to configure the upstream named backend to route traffic to the public address and
+
Use Jinja templating syntax to configure the backend
upstream to route traffic to the public address and
-port 8080 of each server in the sample_app_instances group.
+port 8080 of each server in the sample_app_instances
group.
@@ -2884,7 +2786,7 @@ Example: load balancing using Ansible an
-
-
Configure Nginx as a load balancer, forwarding requests to the / URL to the upstream named backend.
+Configure Nginx as a load balancer, forwarding requests to the / URL to the backend
upstream.
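Putting the callouts above together, the rendered output of the template might look like the following minimal sketch (server addresses are made-up placeholders; the real Jinja template is in Example 36):

```nginx
# Hypothetical rendered nginx.conf: an upstream named backend with one entry
# per app server on port 8080, and a server block that forwards / to it.
events {}

http {
  upstream backend {
    server app-server-1.example.com:8080;
    server app-server-2.example.com:8080;
    server app-server-3.example.com:8080;
  }

  server {
    listen 80;

    location / {
      proxy_pass http://backend;
    }
  }
}
```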
@@ -2902,15 +2804,13 @@ Example: load balancing using Ansible an
-In nginx/tasks/main.yml, configure the tasks for the nginx role with the contents shown in
-
-Example 38:
+Create nginx/tasks/main.yml with the contents shown in Example 37:
-Example 38. Nginx role tasks (ch3/ansible/roles/nginx/tasks/main.yml)
+Example 37. nginx
role tasks (ch3/ansible/roles/nginx/tasks/main.yml)
@@ -2956,7 +2856,7 @@ Example: load balancing using Ansible an
-The tasks in the Nginx role are:
+This file defines the tasks for the nginx
role, which are the following:
@@ -3026,21 +2926,21 @@ Example: load balancing using Ansible an
The value on the left, "xxx.us-east-2.compute.amazonaws.com," is a domain name you can use to access the Nginx server.
-If you open http://xxx.us-east-2.compute.amazonaws.com (this time with no port number, as Nginx is listening on port 80,
-the default port for HTTP) in your browser, you should see "Hello, World!" yet again. Each time you refresh the page,
-Nginx will send that request to a different EC2 instance. Congrats, you now have a single endpoint you can give your
-users, and it will automatically balance the load across multiple servers!
+If you open http://xxx.us-east-2.compute.amazonaws.com
+(this time with no port number, as Nginx is listening on port
+80, the default port for HTTP) in your browser, you should see "Hello, World!" yet again. Each time you refresh the
+page, Nginx will send that request to a different EC2 instance. Congrats, you now have a single endpoint you can give
+your users, and that endpoint will automatically balance the load across multiple servers!
-
+
-Example: rolling out updates with Ansible
+Example: Roll Out Updates with Ansible
@@ -3052,13 +2952,13 @@ Example: rolling out updates with Ansib
serving traffic, while others are being updated. With Ansible, the easiest way to have it do a rolling update is to add
-the serial
parameter to configure_sample_app_playbook.yml, as shown in Example 39:
+the serial
parameter to configure_sample_app_playbook.yml, as shown in Example 38:
-Example 39. Use the serial parameter to enable rolling deployment (ch3/ansible/configure_sample_app_playbook.yml)
+Example 38. Use the serial parameter to enable rolling deployment (ch3/ansible/configure_sample_app_playbook.yml)
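The example body itself is not in this diff; a hedged sketch of what adding serial to the playbook looks like (role list and value are assumptions based on the surrounding text):

```yaml
# Hypothetical sketch: with serial set, Ansible updates that many servers at
# a time, so the remaining servers keep serving traffic during the rollout.
- name: Configure the servers to run the sample app
  hosts: sample_app_instances
  serial: 1   # roll out to one server at a time
  roles:
    - nodejs-app
    - sample-app
```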
@@ -3118,13 +3018,13 @@ Example: rolling out updates with Ansib
Let’s give the rolling deployment a shot. Update the text that the app responds with in app.js, as shown in
-Example 40:
+Example 39:
-Example 40. Update the app to respond with the text "Fundamentals of DevOps!" (ch3/ansible/roles/sample-app/files/app.js)
+Example 39. Update the app to respond with the text "Fundamentals of DevOps!" (ch3/ansible/roles/sample-app/files/app.js)
@@ -3194,17 +3094,15 @@ Example: rolling out updates with Ansib
-
-
If you wanted to add a fourth EC2 instance to run your apps, what changes would you have to make to
-
-create_ec2_instances_playbook.yml? What about configure_nginx_playbook.yml?
+Figure out how to scale the number of instances running the sample app from three to four.
-
-
Try restarting one of the EC2 instances using the AWS Console. How does nginx handle it while the instance is
+
Try restarting one of the instances using the AWS Console. How does nginx handle it while the instance is
-rebooting? Does the sample-app still work after the reboot? How does this compare to the behavior you saw in
+rebooting? Does the sample app still work after the reboot? How does this compare to the behavior you saw in
Part 1?
@@ -3212,7 +3110,7 @@ Example: rolling out updates with Ansib
-
-
Try terminating one of the EC2 instances using the AWS Console. How does nginx handle it? How can you restore the
+
Try terminating one of the instances using the AWS Console. How does nginx handle it? How can you restore the
instance?
@@ -3246,11 +3144,9 @@ Example: rolling out updates with Ansib
-
-
-VM orchestration
+VM Orchestration
@@ -3298,11 +3194,11 @@ VM orchestration
servers are all virtual servers, so you can spin up new ones and tear down old ones in minutes. That said, you can also
-use virtualization on-prem, with VMWare as the dominant player in that space. We’ll take a look at a VM orchestration
+use virtualization on-prem with tools from VMware, Citrix, Microsoft Hyper-V, and so on. We’ll take a look at a VM
-example using AWS, but be aware that the basic techniques here apply to most VM orchestration tools, whether in the
+orchestration example using AWS, but be aware that the basic techniques here apply to most VM orchestration tools,
-cloud or on-prem.
+whether in the cloud or on-prem.
@@ -3338,13 +3234,9 @@ VM orchestration
-
-
-Example: VM orchestration using Packer, OpenTofu, and AWS Auto Scaling Groups
-
-To get a feel for VM orchestration, you need three things:
+To get a feel for VM orchestration, let’s go through an example. This requires the following three things:
@@ -3362,9 +3254,7 @@ Example: VM orchestration using Packer, OpenTo
-
-
A tool for orchestrating VMs: This blog post series primarily uses AWS, so you’ll use AWS' VM orchestration tool,
-
-Auto Scaling Groups.
+A tool for orchestrating VMs: This blog post series primarily uses AWS, so you’ll use AWS Auto Scaling Groups.
@@ -3388,9 +3278,9 @@ Example: VM orchestration using Packer, OpenTo
-
+
-Example: building VM images using Packer
+Example: Build a VM Image Using Packer
@@ -3450,13 +3340,13 @@ Example: building VM images us
-Example 41 shows the updates to make to the Packer template:
+Example 40 shows the updates to make to the Packer template:
-Example 41. Update the Packer template to use PM2 as a process supervisor and create app-user
(ch3/packer/sample-app.pkr.hcl)
+Example 40. Update the Packer template to use PM2 as a process supervisor and create app-user
(ch3/packer/sample-app.pkr.hcl)
@@ -3518,11 +3408,11 @@ Example: building VM images us
-The main changes are to make similar security and reliability improvements to the ones in the server orchestration
+
The main changes are to make security and reliability improvements similar to the ones you did in the server
-section: that is, use PM2 as a process supervisor and create app-user
to run the app (instead of using the root
+orchestration section: that is, use PM2 as a process supervisor and create app-user
to run the app (instead of using
-user).
+the root user).
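The provisioning steps in the Packer template might look roughly like this sketch; the source name and exact commands are assumptions, not the book's exact template:

```hcl
build {
  sources = ["source.amazon-ebs.sample_app"]

  provisioner "shell" {
    inline = [
      "curl -fsSL https://rpm.nodesource.com/setup_21.x | sudo bash -",
      "sudo yum install -y nodejs",
      "sudo adduser app-user",    # run the app as app-user instead of root
      "sudo npm install -g pm2",  # PM2 as the process supervisor
    ]
  }

  provisioner "file" {
    source      = "app.js"
    destination = "/home/ec2-user/app.js"
  }
}
```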
@@ -3588,7 +3478,7 @@ Example: building VM images us
-When the build is done, Packer will output the ID of the newly created AMI. Make sure to jot this down somewhere, as
+
When the build is done, Packer will output the ID of the newly created AMI. Make sure to jot this ID down somewhere, as
you’ll need it shortly.
@@ -3596,9 +3486,9 @@ Example: building VM images us
-
+
-Example: deploying VM images in AWS using OpenTofu and Auto Scaling Groups
+Example: Deploy a VM Image in an Auto Scaling Group Using OpenTofu
@@ -3630,7 +3520,7 @@
-
Let’s use a reusable module (as introduced in Section 2.5.3) called asg
from this blog post series’s
+
+Let’s use a reusable OpenTofu module called asg
 from this blog post series’s
from this blog post series’s
sample code repo to deploy an ASG. You can find the module in the ch3/tofu/modules/asg
@@ -3754,13 +3644,13 @@
Example 42:
+Example 41:
-Example 42. Configure the asg
module (ch3/tofu/live/asg-sample/main.tf)
+Example 41. Configure the asg
module (ch3/tofu/live/asg-sample/main.tf)
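Usage of the asg module might look roughly like this sketch; the input variable names are assumptions based on the surrounding text:

```hcl
module "asg" {
  source = "../../modules/asg"

  name          = "sample-app-asg"
  ami_id        = "<YOUR_AMI_ID>"  # the AMI ID that Packer output earlier
  instance_type = "t2.micro"

  # Run three instances of the sample app
  min_size = 3
  max_size = 3
}
```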
@@ -3842,7 +3732,7 @@ Example 43.
+Example 42.
@@ -3882,13 +3772,13 @@
-
Create a file called user-data.sh with the contents shown in Example 43:
+Create a file called user-data.sh with the contents shown in Example 42:
-Example 43. The user data script for each EC2 instance, which uses PM2 to start the sample-app (ch3/tofu/live/asg-sample/user-data.sh)
+Example 42. The user data script for each EC2 instance, which uses PM2 to start the sample app (ch3/tofu/live/asg-sample/user-data.sh)
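EC2 user data is a script that runs on first boot; a sketch along these lines, where the paths and user name are assumptions:

```bash
#!/usr/bin/env bash
# Runs on the first boot of each EC2 instance launched by the ASG.
# Start the sample app under PM2 as app-user rather than root.
set -e

sudo -u app-user pm2 start /home/app-user/app.js
# Save the process list so PM2 can resurrect the app after a reboot
sudo -u app-user pm2 save
```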
@@ -3970,15 +3860,15 @@
+
-Example: load balancing using OpenTofu and AWS
+Example: Deploy an Application Load Balancer Using OpenTofu
In the server orchestration section, you deployed your own load balancer using Nginx. This was a very simplified
-deployment that works fine for an example, but has a number of drawbacks if you try to use it for production apps:
+deployment that worked fine for an example, but had a number of drawbacks if you tried to use it for production apps:
@@ -4048,11 +3938,11 @@ Example: load balancing using OpenTofu and A
and as I mentioned before, almost every cloud provider offers a managed service for load balancing, such as
-AWS Elastic Load Balancers,
+AWS Elastic Load Balancer,
-GCP Cloud Load Balancers, and
+GCP Cloud Load Balancer, and
-Azure Load Balancers. All of these
+Azure Load Balancer. All of these
provide a number of powerful features out-of-the-box. For example, the AWS Elastic Load Balancer (ELB) gives you the
@@ -4068,7 +3958,7 @@ Example: load balancing using OpenTofu and A
-
-
under the hood, AWS automatically deploys multiple servers for an ELB so you don’t get an outage
+
Under the hood, AWS automatically deploys multiple servers for an ELB so you don’t get an outage
if one server crashes.
@@ -4098,9 +3988,7 @@ Example: load balancing using OpenTofu and A
AWS load balancers are hardened against a variety of attacks, including meeting the requirements of a
-variety of security standards (e.g., SOC 2, ISO 27001, HIPAA, PCI, FedRAMP) out-of-the-box (see
-
-AWS Services in Scope by Compliance Program).
+variety of security standards (e.g., SOC 2, ISO 27001, HIPAA, PCI, FedRAMP) out-of-the-box.[12]
@@ -4130,6 +4018,18 @@ Example: load balancing using OpenTofu and A
+
+
+
+
+
+
+
+
+Figure 16. An ALB consists of listeners, listener rules, and target groups.
+
+
+
@@ -4170,18 +4070,6 @@ Example: load balancing using OpenTofu and A
-
-
-
-
-
-
-
-
-Figure 16. An ALB consists of listeners, listener rules, and target groups.
-
-
-
The blog post series’s sample code repo includes a module called alb
in the ch3/tofu/modules/alb folder that you
@@ -4196,13 +4084,13 @@
Example: load balancing using OpenTofu and A
-Example 44 shows how to update the asg-sample
module to use the alb
module:
+Example 43 shows how to update the asg-sample
module to use the alb
module:
-Example 44. Configure the alb
module (ch3/tofu/live/asg-sample/main.tf)
+Example 43. Configure the alb
module (ch3/tofu/live/asg-sample/main.tf)
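Usage of the alb module might look roughly like this sketch; the input variable names are assumptions:

```hcl
module "alb" {
  source = "../../modules/alb"

  name          = "sample-app-alb"
  alb_http_port = 80    # port the ALB listens on
  app_http_port = 8080  # port the app listens on inside each instance
}
```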
@@ -4280,13 +4168,13 @@ Example: load balancing using OpenTofu and A
to send traffic to (which instances to put in its target group)? To tie these pieces together, go back to your usage
-of the asg
module, and update it with one parameter, as shown in Example 45:
+of the asg
module, and update it with one parameter, as shown in Example 44:
-Example 45. Configure the asg
module (ch3/tofu/live/asg-sample/main.tf)
+Example 44. Configure the asg
module (ch3/tofu/live/asg-sample/main.tf)
@@ -4360,13 +4248,13 @@ Example: load balancing using OpenTofu and A
The final change to the asg-sample
module is to add the load balancer’s domain name as an output variable in
-outputs.tf, as shown in Example 46:
+outputs.tf, as shown in Example 45:
-Example 46. Output the ALB domain name (ch3/tofu/live/asg-sample/outputs.tf)
+Example 45. Output the ALB domain name (ch3/tofu/live/asg-sample/outputs.tf)
@@ -4444,9 +4332,9 @@ Example: load balancing using OpenTofu and A
-
+
-Example: rolling out updates with OpenTofu and Auto Scaling Groups
+Example: Roll Out Updates with OpenTofu and Auto Scaling Groups
@@ -4458,7 +4346,7 @@ Example: rolling out updates with OpenTo
instance refresh, which can update
-your instances automatically by doing a rolling deployment. Example 47 shows how to enable
+your instances automatically by doing a rolling deployment. Example 46 shows how to enable
instance refresh in the asg
module:
@@ -4466,7 +4354,7 @@ Example: rolling out updates with OpenTo
-Example 47. Enable instance refresh for the ASG (ch3/tofu/live/asg-sample/main.tf)
+Example 46. Enable instance refresh for the ASG (ch3/tofu/live/asg-sample/main.tf)
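Under the hood, enabling instance refresh boils down to an instance_refresh block on the aws_autoscaling_group resource, roughly like the following sketch (not the module's exact code):

```hcl
resource "aws_autoscaling_group" "sample_app" {
  # ... other ASG settings omitted ...

  # When the launch template changes, replace instances via a rolling deployment
  instance_refresh {
    strategy = "Rolling"
    preferences {
      # Keep the deployment at full capacity while old instances are replaced
      min_healthy_percentage = 100
    }
  }
}
```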
@@ -4556,13 +4444,13 @@ Example: rolling out updates with OpenTo
For example, update app.js in the packer folder to respond with "Fundamentals of DevOps!", as shown in
-Example 48:
+Example 47:
-Example 48. Update the app to respond with the text "Fundamentals of DevOps!" (ch3/packer/app.js)
+Example 47. Update the app to respond with the text "Fundamentals of DevOps!" (ch3/packer/app.js)
@@ -4646,9 +4534,9 @@ Example: rolling out updates with OpenTo
~ resource "aws_launch_template" "sample_app" {
- ~ image_id = "ami-0f5b3d9c244e6026d" -> "ami-0d68b7b6546331281"
+ ~ image_id = "ami-0f5b3d9c244e6026d" -> "ami-0d68b7b6546331281"
- ~ latest_version = 1 -> (known after apply)
+ ~ latest_version = 1 -> (known after apply)
# (10 unchanged attributes hidden)
@@ -4816,17 +4704,17 @@ Example: rolling out updates with OpenTo
-
-
If you wanted to add a fourth EC2 instance to run your apps, what changes would you have to make to
+
Figure out how to scale the number of instances in the ASG running the sample app from three to four.
-the OpenTofu code? How does this compare to adding a fourth EC2 instance to the Ansible code?
+How does this compare to adding a fourth instance to the Ansible code?
-
-
Try restarting one of the EC2 instances using the AWS Console. How does the ALB handle it while the instance is
+
Try restarting one of the instances using the AWS Console. How does the ALB handle it while the instance is
-rebooting? Does the sample-app still work after the reboot? How does this compare to the behavior you saw when
+rebooting? Does the sample app still work after the reboot? How does this compare to the behavior you saw when
restarting an instance with Ansible?
@@ -4834,7 +4722,7 @@ Example: rolling out updates with OpenTo
-
-
Try terminating one of the EC2 instances using the AWS Console. How does the ALB handle it? Do you need to do
+
Try terminating one of the instances using the AWS Console. How does the ALB handle it? Do you need to do
anything to restore the instance?
@@ -4864,17 +4752,13 @@ Example: rolling out updates with OpenTo
-
-
-Container orchestration
+Container Orchestration
-I first mentioned containers back in Section 2.4, introducing them as essentially a lightweight
-
-alternative to VMs. The idea with container orchestration is to do the following:
+The idea with container orchestration is to do the following:
@@ -5034,10 +4918,6 @@ Container orchestration
-
-
-An example of container orchestration
-
There are many container tools out there, including Docker,
@@ -5054,9 +4934,9 @@
An example of container orchestr
Mesos, and OpenShift.
-The most popular by far are Docker and Kubernetes—so much so their names are nearly synonymous with containers and
+The most popular, by far, are Docker and Kubernetes—so much so their names are nearly synonymous with containers and
-container orchestration—so that’s what we’ll focus on in this blog post series.
+container orchestration, respectively—so that’s what we’ll focus on in this blog post series.
@@ -5068,9 +4948,9 @@ An example of container orchestr
-
+
-Example: a crash course on Docker
+Example: A Crash Course on Docker
@@ -5164,7 +5044,7 @@ Example: a crash course on Docker
-How did this happen? Well, first, Docker searches your local filesystem for the ubuntu:20.04
image. If you don’t
+
How did this happen? Well, first, Docker searches your local filesystem for the ubuntu:24.04
image. If you don’t
have that image downloaded already, Docker downloads it automatically from Docker Hub, which
@@ -5252,11 +5132,11 @@
Example: a crash course on Docker
-Next, exit the container by hitting Ctrl-D on Windows and Linux or Cmd-D on macOS, and you should be back in your
+
Next, exit the container by hitting Ctrl-D, and you should be back in your original command prompt on your underlying
-original command prompt on your underlying host OS. If you try to look for the test.txt file you just wrote, you’ll
+host OS. If you try to look for the test.txt file you just wrote, you’ll see that it doesn’t exist: the container’s
-see that it doesn’t exist: the container’s filesystem is totally isolated from your host OS.
+filesystem is totally isolated from your host OS.
@@ -5302,7 +5182,7 @@ Example: a crash course on Docker
-Hit Ctrl-D or Cmd-D again to exit the container, and back on your host OS, run the docker ps -a
command:
+Hit Ctrl-D again to exit the container, and back on your host OS, run the docker ps -a
command:
@@ -5368,7 +5248,7 @@ Example: a crash course on Docker
Let’s now see how a container can be used to run a web app: in particular, the Node.js sample app you’ve been using
-throughout this blog post series. Hit Ctrl-D or Cmd-D again to exit the container, and back on your host OS, create a
+throughout this blog post series. Hit Ctrl-D to exit the container, and back on your host OS, create a
new folder called docker:
@@ -5390,9 +5270,9 @@ Example: a crash course on Docker
-You should also copy app.js (note: you do not need to copy app.config.js this time) from the server orchestration
+
Copy app.js from the server orchestration section into the docker folder (note: you do not need to copy
-section into the docker folder:
+app.config.js this time):
@@ -5408,13 +5288,13 @@ Example: a crash course on Docker
-In the docker folder, create a file called Dockerfile, with the contents shown in Example 49:
+Next, create a file called Dockerfile, with the contents shown in Example 48:
-Example 49. Dockerfile for the Node.js sample-app (ch3/docker/Dockerfile)
+Example 48. Dockerfile for the Node.js sample-app (ch3/docker/Dockerfile)
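A Dockerfile for the app might look roughly like this sketch; the base image tag and paths are assumptions:

```dockerfile
# Start from an official Node.js base image
FROM node:21.7

# Copy the app code into the image
WORKDIR /home/node/app
COPY app.js .

# Document the port the app listens on
EXPOSE 8080

# Run the app when the container starts
CMD ["node", "app.js"]
```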
@@ -5494,7 +5374,7 @@ Example: a crash course on Docker
-
-
The COPY
command copies app.js into the Docker image.
+Copy app.js into the Docker image.
@@ -5622,11 +5502,11 @@ Example: a crash course on Docker
-First, hit Ctrl-C to shut down the sample-app container: note that it’s C this time, not D, and it’s Ctrl
+
First, hit Ctrl-C to shut down the sample-app
container: note that it’s Ctrl-C this time, not Ctrl-D, as you’re
-regardless of OS, as you’re shutting down a process, rather than exiting an interactive prompt. Now rerun the
+shutting down a process, rather than exiting an interactive prompt. Now rerun the container but this time with the
-container but this time with the -p
flag as follows:
+-p
flag as follows:
@@ -5714,9 +5594,9 @@ Example: a crash course on Docker
-
+
-Example: a crash course on Kubernetes
+Example: Deploy a Dockerized App with Kubernetes
@@ -5898,7 +5778,7 @@ Example: a crash course on Kubernetes
Kubernetes Deployment, which is a declarative way to manage an application in Kubernetes. The Deployment allows you
-to declare what Docker images to run, how many copies of them to run (called replicas), a variety of settings for
+to declare what Docker images to run, how many copies of them to run (replicas), a variety of settings for
those images (e.g., CPU, memory, port numbers, environment variables), and so on, and the Deployment will then work to
@@ -5908,7 +5788,7 @@ Example: a crash course on Kubernetes
-One way to interact with Kubernetes is to create YAML files describing what you want and to use the
+
One way to interact with Kubernetes is to create YAML files to define your Kubernetes objects, and to use the
kubectl apply
command to submit those objects to the cluster. Create a new folder called kubernetes to store these
@@ -5934,13 +5814,13 @@
Example: a crash course on Kubernetes
Within the kubernetes folder, create a file called sample-app-deployment.yml with the contents shown in
-Example 50:
+Example 49:
-Example 50. The YAML for a Kubernetes Deployment (ch3/kubernetes/sample-app-deployment.yml)
+Example 49. The YAML for a Kubernetes Deployment (ch3/kubernetes/sample-app-deployment.yml)
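The Deployment YAML might look roughly like this sketch; the labels and image tag are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app-deployment
spec:
  replicas: 3                # run three copies (replicas) of the app
  selector:
    matchLabels:
      app: sample-app-pods   # manage Pods with this label
  template:                  # the Pod template
    metadata:
      labels:
        app: sample-app-pods
    spec:
      containers:
        - name: sample-app
          image: sample-app:v1
          ports:
            - containerPort: 8080
```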
@@ -6050,7 +5930,7 @@ Example: a crash course on Kubernetes
-
-
Templates can be used separate from Deployments, so they have separate metadata which allows you to identify and
+
Templates can be used separately from Deployments, so they have separate metadata which allows you to identify and
target that template in API calls (this is another example of Kubernetes trying to be highly flexible and
@@ -6134,7 +6014,7 @@
Example: a crash course on Kubernetes
This command should complete very quickly. How do you know if it actually worked? To answer that question, you can
-use kubectl
to explore your cluster. First, run the get deployments
command:
+use kubectl
to explore your cluster. First, run the get deployments
command, and you should see your Deployment:
@@ -6154,7 +6034,7 @@ Example: a crash course on Kubernetes
-Here, you can see how Kubernetes uses metadata, as the name of the deployment (sample-app-deployment) comes from your
+
Here, you can see how Kubernetes uses metadata, as the name of the Deployment (sample-app-deployment) comes from your
metadata
block. You can use that metadata in API calls yourself. For example, to get more details about a specific
@@ -6174,7 +6054,7 @@
Example: a crash course on Kubernetes
Selector: app=sample-app-pods
-Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
+Replicas: 3 desired | 3 updated | 3 total | 3 available
StrategyType: RollingUpdate
@@ -6296,9 +6176,9 @@ Example: a crash course on Kubernetes
-
+
-Example: load balancing with Kubernetes
+Example: Deploy a Load Balancer with Kubernetes
@@ -6306,7 +6186,7 @@ Example: load balancing with Ku
object, called a Kubernetes Service, which is a way to expose an app running in Kubernetes as a service you can
-talk to over the network. Example 51 shows the YAML code for a Kubernetes service, which you
+talk to over the network. Example 50 shows the YAML code for a Kubernetes service, which you
should put in a file called sample-app-service.yml:
@@ -6314,7 +6194,7 @@ Example: load balancing with Ku
-Example 51. The YAML for a Kubernetes Service (ch3/kubernetes/sample-app-service.yml)
+Example 50. The YAML for a Kubernetes Service (ch3/kubernetes/sample-app-service.yml)
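The Service YAML might look roughly like this sketch; the names and labels are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-app-loadbalancer
spec:
  type: LoadBalancer           # provision a load balancer for this Service
  selector:
    app: sample-app-pods       # route traffic to Pods with this label
  ports:
    - protocol: TCP
      port: 80                 # port the load balancer listens on
      targetPort: 8080         # port the app listens on inside each Pod
```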
@@ -6444,11 +6324,11 @@ Example: load balancing with Ku
$ kubectl get services
-NAME TYPE CLUSTER-IP ExTERNAL-IP PORT(S)
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
-kubernetes ClusterIP 10.96.0.1 <none> 443/TCP
+kubernetes ClusterIP 10.96.0.1 <none> 443/TCP
-sample-app-loadbalancer LoadBalancer 10.111.250.210 localhost 80:30910/TCP
+sample-app-loadbalancer  LoadBalancer  10.111.250.210  localhost     80:30910/TCP
@@ -6510,29 +6390,29 @@ Example: load balancing with Ku
-Congrats! You’re now able to deploy Docker containers with Kubernetes and distribute traffic across your containers
+
Congrats, you’re now able to deploy Docker containers with Kubernetes and distribute traffic across your containers
-with a load balancer. But what if you want to update your app?
+with a load balancer! But what if you want to update your app?
-
+
-Example: rolling out updates with Kubernetes
+Example: Roll Out Updates with Kubernetes
Kubernetes Deployments have built-in support for rolling updates. Open up sample-app-deployment.yml and add the
-code shown in Example 52 to the bottom of the spec
section:
+code shown in Example 51 to the bottom of the spec
section:
-Example 52. The YAML for doing rolling updates (ch3/kubernetes/sample-app-deployment.yml)
+Example 51. The YAML for doing rolling updates (ch3/kubernetes/sample-app-deployment.yml)
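The added code is a strategy block in the Deployment's spec section, roughly like this sketch:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 3          # how many extra Pods may run during the rollout
    maxUnavailable: 0    # never take old Pods down before replacements are ready
```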
@@ -6590,13 +6470,13 @@ Example: rolling out updat
Now, make a change to the sample app in docker/app.js, such as returning the text "Fundamentals of DevOps!" instead of
-"Hello, World!", as shown in Example 53:
+"Hello, World!", as shown in Example 52:
-Example 53. Update the app to respond with the text "Fundamentals of DevOps!" (ch3/docker/app.js)
+Example 52. Update the app to respond with the text "Fundamentals of DevOps!" (ch3/docker/app.js)
@@ -6642,13 +6522,13 @@ Example: rolling out updat
Next, open sample-app-deployment.yml one more time, and in the spec
section, update the image
-from sample-app:v1
to sample-app:v2
, as shown in Example 54:
+from sample-app:v1
to sample-app:v2
, as shown in Example 53:
-Example 54. Update the Deployment to use the v2 image (ch3/kubernetes/sample-app-deployment.yml)
+Example 53. Update the Deployment to use the v2 image (ch3/kubernetes/sample-app-deployment.yml)
@@ -6836,9 +6716,9 @@ Example: rolling out updat
-
+
-Example: deploying a Kubernetes cluster in AWS
+Example: Deploy a Kubernetes Cluster in AWS Using EKS
@@ -6936,13 +6816,13 @@ Example: deploying a Kubernetes cluster i
Inside of the eks-sample folder, create a file called main.tf, with the contents shown in
-Example 55:
+Example 54:
-Example 55. Configure the eks-cluster
module (ch3/tofu/live/eks-sample/main.tf)
+Example 54. Configure the eks-cluster
module (ch3/tofu/live/eks-sample/main.tf)
@@ -7080,9 +6960,7 @@ Example: deploying a Kubernetes cluster i
Where <REGION>
is the AWS region you deployed the EKS cluster into and <CLUSTER_NAME>
is the name of the EKS
-cluster. If you deployed the eks-cluster
module with default settings, these are us-east-2
and eks-tofu
,
-
-respectively, so you can run the following:
+cluster. The preceding code used us-east-2
and eks-tofu
for these, respectively, so you can run the following:
@@ -7142,9 +7020,9 @@ Example: deploying a Kubernetes cluster i
-
+
-Example: pushing Docker images to ECR
+Example: Push a Docker Image to ECR
@@ -7196,13 +7074,13 @@ Example: pushing Docker images to
-In the ecr-sample folder, create a file called main.tf with the contents shown in Example 56:
+In the ecr-sample folder, create a file called main.tf with the contents shown in Example 55:
-Example 56. Configure the ecr-repo
module (ch3/tofu/live/ecr-sample/main.tf)
+Example 55. Configure the ecr-repo
module (ch3/tofu/live/ecr-sample/main.tf)
@@ -7244,13 +7122,13 @@ Example: pushing Docker images to
-You should also create outputs.tf with an output variable, as shown in Example 57:
+You should also create outputs.tf with an output variable, as shown in Example 56:
-Example 57. The ecr-sample
module output variables (ch3/tofu/live/ecr-sample/outputs.tf)
+Example 56. The ecr-sample
module output variables (ch3/tofu/live/ecr-sample/outputs.tf)
@@ -7410,7 +7288,7 @@ Example: pushing Docker images to
sample-app:v3 \
- 111111111111.dkr.ecr.us-east-2.amazonaws.com/sample-app:v3
+ <YOUR_ECR_REPO_URL>:v3
@@ -7420,7 +7298,9 @@ Example: pushing Docker images to
Next, you need to authenticate to your ECR repo, which you can do using a combination of the aws
CLI and the docker
-CLI:
+CLI, making sure to replace the last argument with the registry URL of your own ECR repo that you got from the
+
+registry_url
output:
@@ -7440,7 +7320,7 @@ Example: pushing Docker images to
--password-stdin \
- 111111111111.dkr.ecr.us-east-2.amazonaws.com/sample-app
+ <YOUR_ECR_REPO_URL>
@@ -7456,7 +7336,7 @@ Example: pushing Docker images to
-$ docker push 111111111111.dkr.ecr.us-east-2.amazonaws.com/sample-app:v3
+$ docker push <YOUR_ECR_REPO_URL>:v3
@@ -7472,9 +7352,9 @@ Example: pushing Docker images to
-
+
-Example: deploying apps into EKS
+Example: Deploy a Dockerized App into an EKS Cluster
@@ -7482,13 +7362,13 @@ Example: deploying apps into EKS
make to the YAML you used to deploy locally is to switch the image
in kubernetes/sample-app-deployment.yml to the
-v3
ECR repo URL, as shown in Example 58:
+v3
ECR repo URL, as shown in Example 57:
-Example 58. Update the Deployment to use the Docker image from your ECR repo (ch3/kubernetes/sample-app-deployment.yml)
+Example 57. Update the Deployment to use the Docker image from your ECR repo (ch3/kubernetes/sample-app-deployment.yml)
@@ -7510,7 +7390,7 @@ Example: deploying apps into EKS
- name: sample-app
- image: 111111111111.dkr.ecr.us-east-2.amazonaws.com/sample-app:v3
+ image: <YOUR_ECR_REPO_URL>:v3
@@ -7574,11 +7454,11 @@ Example: deploying apps into EKS
-NAME TYPE ExTERNAL-IP PORT(S)
+NAME TYPE EXTERNAL-IP PORT(S)
-kubernetes ClusterIP <none> 443/TCP
+kubernetes ClusterIP <none> 443/TCP
-sample-app-loadbalancer LoadBalancer xxx.us-east-2.elb.amazonaws.com 80:32254/TCP
+sample-app-loadbalancer  LoadBalancer  xx.us-east-2.elb.amazonaws.com  80:32254/TCP
@@ -7596,7 +7476,7 @@ Example: deploying apps into EKS
-$ curl xxx.us-east-2.elb.amazonaws.com
+$ curl xx.us-east-2.elb.amazonaws.com
Fundamentals of DevOps!
@@ -7662,7 +7542,7 @@ Example: deploying apps into EKS
-
-
Try terminating one of the worker node EC2 instances using the AWS Console. How does the ELB handle it? How does EKS
+
Try terminating one of the worker node instances using the AWS Console. How does the ELB handle it? How does EKS
respond? Do you need to do anything to restore the instance or your containers?
@@ -7700,11 +7580,9 @@ Example: deploying apps into EKS
-
-
-Serverless orchestration
+Serverless Orchestration
@@ -7902,15 +7780,13 @@ Serverless orchestration
cold starts, where on the first run, or the first run after a period of idleness, the serverless provider needs
-to download your deployment package and run it, which can take a few seconds: this is plenty fast for a deployment, but
-
-for some use cases, such as responding to live HTTP requests, it can be unacceptably slow. FaaS in particular also
+to download your deployment package and run it, which can take up to several seconds; for some use cases, such as
-struggles with use cases that require long-running connections, such as database connection pools or WebSockets: there
+responding to live HTTP requests, this is unacceptably slow. FaaS also struggles with use cases that require long-running
-are solutions, but they are typically considerably more complicated than using long-running connections with other
+connections, such as database connection pools or WebSockets: there are solutions, but they are typically considerably
-orchestration approaches.
+more complicated than using long-running connections with other orchestration approaches.
@@ -7948,9 +7824,9 @@ Serverless orchestration
allowed you to deploy web apps without having to think about servers or clusters. However, this required that the
-apps were written in very specific ways, with even more limitations than Lambda: e.g., specific languages,
+apps were written in very specific ways: e.g., specific languages, frameworks, data stores, runtime limits, data
-frameworks, data stores, runtime limits, data access patterns, etc.
+access patterns, etc.
@@ -8000,10 +7876,6 @@ Serverless orchestration
-
-
-An example of serverless orchestration
-
To get a feel for serverless, let’s try out what is arguably the most popular approach, which is AWS Lambda and FaaS.
@@ -8014,17 +7886,47 @@
An example of serverless orchestration
-
+
-Example: serverless functions with AWS Lambda
+Example: Deploy a Serverless Function with AWS Lambda
The blog post series’s sample code repo includes a module called lambda
in the ch3/tofu/modules/lambda folder that
-can deploy a serverless function using AWS Lambda. To use the lambda
module, create a live/lambda-sample folder to
+can do the following:
+
+
+
+
+
+
+
+-
+
+
Zip up a folder you specify into a deployment package.
+
+
+
+-
+
+
Upload the deployment package as an AWS Lambda function.
+
+
+
+-
+
+
Configure various settings for the Lambda function, such as memory, CPU, and environment variables.
+
+
+
+
+
+
+
+
-use as a root module:
+To use the lambda
module, create a live/lambda-sample folder to use as a root module:
@@ -8044,13 +7946,13 @@ Example: serverless funct
-In the lambda-sample folder, create a file called main.tf with the contents shown in Example 59:
+In the lambda-sample folder, create a file called main.tf with the contents shown in Example 58:
-Example 59. Configure the lambda
module (ch3/tofu/live/lambda-sample/main.tf)
+Example 58. Configure the lambda
module (ch3/tofu/live/lambda-sample/main.tf)
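Usage of the lambda module might look roughly like this sketch; the input names follow the surrounding text (src_dir, runtime, handler), while the runtime version and memory setting are assumptions:

```hcl
module "function" {
  source = "../../modules/lambda"

  name    = "lambda-sample"
  src_dir = "${path.module}/src"  # folder to zip into the deployment package

  runtime = "nodejs20.x"          # Node.js runtime
  handler = "index.handler"       # the handler function in src/index.js

  memory_size = 128
}
```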
@@ -8126,7 +8028,7 @@ Example: serverless funct
src_dir
: The directory which contains the code for the Lambda function. The lambda
module will zip this folder
-up into a deployment package. Example 60 shows the contents of this folder.
+up into a deployment package. Example 59 shows the contents of this folder.
@@ -8134,9 +8036,7 @@ Example: serverless funct
runtime
: The runtime used by this function. AWS Lambda supports runtimes such as Node.js, Python, Java, Ruby,
-and .NET, as well as the ability to create custom runtimes for all other languages (see the
-
-Lambda runtimes documentation for details).
+and .NET, as well as the ability to create custom runtimes for all other languages.[17]
@@ -8148,7 +8048,7 @@ Example: serverless funct
Lambda will pass this function the event information. The preceding code sets the handler to the handler
function
-in index.js, which is shown in Example 60.
+in index.js, which is shown in Example 59.
@@ -8182,13 +8082,13 @@ Example: serverless funct
Create a folder in lambda-sample/src, and inside that folder, create a file called index.js, which defines the
-handler, as shown in Example 60:
+handler, as shown in Example 59:
-Example 60. The handler code in index.js (ch3/tofu/live/lambda-sample/src/index.js)
+Example 59. The handler code in index.js (ch3/tofu/live/lambda-sample/src/index.js)
@@ -8298,9 +8198,9 @@ Example: serverless funct
-
+
-Example: triggering Lambda functions with HTTP requests using API Gateway
+Example: Deploy an API Gateway in Front of AWS Lambda
@@ -8336,13 +8236,13 @@ Example 61 shows how to update the lambda-sample
module to use the api-gateway
module:
+Example 60 shows how to update the lambda-sample
module to use the api-gateway
module:
-Example 61. Configure the api-gateway
module to trigger the Lambda function (ch3/tofu/live/lambda-sample/main.tf)
+Example 60. Configure the api-gateway
module to trigger the Lambda function (ch3/tofu/live/lambda-sample/main.tf)
@@ -8390,7 +8290,9 @@
-
You should also add an output variable in outputs.tf, as shown in Example 62:
+You should also add an output variable in outputs.tf, as shown in Example 61:
-Example 62. The lambda-sample
module’s outputs (ch3/tofu/live/lambda-sample/outputs.tf)
+Example 61. The lambda-sample
module’s outputs (ch3/tofu/live/lambda-sample/outputs.tf)
@@ -8452,7 +8354,9 @@
-$ tofu apply
+$ tofu init
+
+$ tofu apply
@@ -8484,19 +8388,19 @@
-
Open this output in a web browser, and you should see "Hello, World!" API Gateway is now routing requests to your Lambda
+
Open this output in a web browser, and you should see "Hello, World!" Congrats, API Gateway is now routing requests to
-function. As load goes up and down, AWS will automatically scale your Lambda functions up and down, and API Gateway
+your Lambda function! As load goes up and down, AWS will automatically scale your Lambda functions up and down, and
-will automatically distribute traffic across these functions.
+API Gateway will automatically distribute traffic across these functions.
-
+
-Example: rolling out updates with Lambda
+Example: Roll Out Updates with AWS Lambda
@@ -8510,13 +8414,13 @@ Example: rolling out updates w
For example, try updating lambda-sample/src/index.js to respond with "Fundamentals of DevOps!" rather than
-"Hello, World!", as shown in Example 63:
+"Hello, World!", as shown in Example 62:
-Example 63. Update the Lambda function response text (ch3/tofu/live/lambda-sample/src/index.js)
+Example 62. Update the Lambda function response text (ch3/tofu/live/lambda-sample/src/index.js)
@@ -8558,7 +8462,7 @@ Example: rolling out updates w
apply
should complete in a few seconds, and if you retry the api_endpoint
URL, you’ll see "Fundamentals of DevOps!"
-right away. So again, deployments with Lambda are fast! In fact, AWS Lambda does effectively an instantanesous
+right away. So again, deployments with Lambda are fast! In fact, AWS Lambda does an instantaneous
switchover from the old to the new version, so it’s effectively a blue-green deployment (which you’ll learn more about
@@ -8644,11 +8548,9 @@
Example: rolling out updates w
-
-
-Comparison of orchestration options
+Comparing Orchestration Options
@@ -8658,7 +8560,7 @@ Comparison of orchestration options
-Section 3.1:
+blog post:
@@ -8686,7 +8588,7 @@ Comparison of orchestration options
+variation within a category inevitably gets lost.
@@ -8742,27 +8644,41 @@ Comparison of orchestration options
Manual
-Manually specify which servers should run which apps. Limited deployment strategies: e.g., Ansible rolling
-
-deployments.
+Manually specify which servers should run which apps.
Supported
-Define a template and the orchestrator spins up servers from that template. Limited deployment
+Define a template and the orchestrator spins up servers from that template.
Strong support
+
+Set up worker nodes, define a template, and the orchestrator schedules containers on the worker nodes.
Strong support
-Set up worker nodes, define a template, and the orchestrator schedules containers on the worker nodes. Multiple
+Upload a deployment package and let the orchestration tool run it whenever it is triggered.
Update strategies
Supported
+
+Limited strategies: e.g., Ansible rolling deployments.
Supported
+
+Limited strategies: e.g., ASG rolling deployments.
Strong support
-Upload a deployment package and let the orchestration tool run it whenever it is triggered. Multiple deployment
+Multiple strategies: e.g., rolling, canary, blue-green.[18]
Strong support
-strategies: e.g., blue-green, canary, traffic shifting.[18]
+Multiple strategies: e.g., blue-green, canary, traffic shifting.[19]
Strong support
-A scheduler decides which containers run where. As an end-user, you get to run multiple containers per server.
+A scheduler decides which containers run where. As an end-user, you see (and pay for) servers, but you get to run multiple containers per server.
Strong support
@@ -8834,7 +8750,7 @@Strong support
@@ -8872,7 +8788,7 @@Supported
-E.g., create an OpenTofu module that exposes variables to configure ASGs for different environments.
+E.g., Create an OpenTofu module that exposes variables to configure ASGs for different environments.
Strong support
@@ -8900,7 +8816,7 @@Supported
-E.g., use Ansible Vault to encrypt and manage
+E.g., Use Ansible Vault to encrypt and manage
sensitive data.
Strong support
-E.g., use Kubernetes Services with Kubernetes Deployments.
+E.g., Use Kubernetes Services with Kubernetes Deployments.
Strong support
-E.g., use API Gateway to trigger Lambda functions in response to HTTP requests.
+E.g., Use API Gateway to trigger Lambda functions in response to HTTP requests.
@@ -8952,23 +8868,23 @@Manual
-E.g., have Ansible pass the IP addresses of servers in its inventory to your apps.
+E.g., Have Ansible pass the IP addresses of servers in its inventory to your apps.
Manual
-E.g., you can use load balancers between ASGs, using AWS APIs to discovery load balancer URLs.
+E.g., You can use load balancers between ASGs, using AWS APIs to discover load balancer URLs.
Strong support
-E.g., use a Kubernetes Service to expose your app on a private IP within the cluster, and then discover IPs
-using environment variables or DNS.[21]
+E.g., Use a Kubernetes Service to expose your app on a private IP within the cluster, and then discover IPs
+using environment variables or DNS.[22]
Strong support
-E.g., Lambda functions can trigger other Lambda functions either directly via API calls or indirectly via
-events.[22]
+E.g., Lambda functions can trigger other Lambda functions either directly via API calls or indirectly via
+events.[23]
Supported
-Ephemeral disks are typically supported, but permanent disks have to be managed manually.[23]
+Ephemeral disks are typically supported, but permanent disks have to be managed manually.[24]
Strong support
@@ -8992,7 +8908,7 @@Not supported
-E.g., the file system for Lambda functions is read-only. If you need to store data, you must use an external data store.
+E.g., The file system for Lambda functions is read-only. If you need to store data, you must use an external data store.
@@ -9080,7 +8996,7 @@Weak
-You have to maintain the servers, the OS and tools on each server, and the orchestration tool itself.[24]
+You have to maintain the servers, the OS and tools on each server, and the orchestration tool itself.[25]
Moderate
@@ -9090,7 +9006,7 @@Very strong
@@ -9140,7 +9056,7 @@Very strong
-It’s very common to run serverless apps in your local dev environment.[26]
+It’s very common to run serverless apps in your local dev environment.[27]
@@ -9180,7 +9096,7 @@Very strong
@@ -9188,7 +9104,7 @@Weak
-Full access to the servers, sometimes full access to the containers[28], and immutable container images make debugging easier, but multiple layers of abstraction, and
+Full access to the servers, sometimes full access to the containers[29], and immutable container images make debugging easier, but multiple layers of abstraction, and
the complexity of orchestration tools make debugging challenging.
Weak
-Limits on runtimes and numerous hoops to jump through for long-running connections.[29]
+Limits on runtimes and numerous hoops to jump through for long-running connections.[30]
@@ -9371,7 +9287,7 @@
Before staging these files, you should create a new file in the root of the repo called .gitignore, with the contents
-shown in Example 64:
+shown in Example 63:
Having your code reviewed by someone else is a highly effective way to catch bugs, reducing defect rates by as much as
-50-80%.[31] Code reviews
+50-80%.[32] Code reviews
are also an efficient mechanism to spread knowledge, culture, training, and a sense of ownership throughout the team.
@@ -3496,11 +3496,11 @@
By default, Git allows you to set your name and email address to any value you want, as shown in
-Example 65:
+Example 64:
For now, update the scripts
block with a start
command, which will define how to start your app, as shown in
-Example 67:
+Example 66:
Of course, start
isn’t the only command you would add. The idea would be to add all the common operations on your
-project to the build. For example, in Section 3.4.1.1, you created a Dockerfile to package the app as
+project to the build. For example, in Section 3.4.1, you created a Dockerfile to package the app as
a Docker image, and in order to build that Docker image for multiple CPU architectures (e.g., ARM64, AMD64), you had to
@@ -3904,13 +3904,13 @@
First, copy the Dockerfile shown in Example 68 into the sample-app folder:
+First, copy the Dockerfile shown in Example 67 into the sample-app folder:
This is identical to the Dockerfile you saw in Section 3.4.1.1, except for two changes:
+This is identical to the Dockerfile you saw in Section 3.4.1, except for two changes:
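For readers who want a reminder of the file being discussed, here is a hypothetical sketch of such a Dockerfile (base image tag, port, and paths are assumptions, not the book's exact file):

```dockerfile
# Hypothetical Dockerfile for the Node.js sample app (illustrative only).
FROM node:21.7
WORKDIR /home/node/app
# Copy in the app code and run it on the app's port.
COPY app.js .
EXPOSE 8080
USER node
CMD ["node", "app.js"]
```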
Next, add a dockerize
command to the scripts
block in package.json, as shown in
-Example 69:
+Example 68:
dockerize
command to the scripts block (ch4/sample-app/package.json)dockerize
command to the scripts block (ch4/sample-app/package.json)If you look into package.json, you will now have a new dependencies
section, as shown in
-Example 70:
+Example 69:
Now that Express is installed, you can rewrite the code in app.js to use the Express framework, as shown in
-Example 71:
+Example 70:
There’s one more thing you need to do now that the app has dependencies: you need to update the Dockerfile to
-install dependencies, as shown in Example 72:
+install dependencies, as shown in Example 71:
(ch4/sample-app/Dockerfile)npm install
(ch4/sample-app/Dockerfile)Consider the Node.js sample app you’ve been using in this blog post, as shown in -Example 73:
+Example 72:Create a file called reverse.js with the contents shown in Example 74:
+Create a file called reverse.js with the contents shown in Example 73:
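As a concrete illustration, reverse.js might contain something like the following (a hypothetical sketch; the book's actual implementation may differ):

```javascript
// Hypothetical sketch of reverse.js: reverse the order of the words in a
// sentence, e.g., "Hello World" becomes "World Hello".
function reverseWords(str) {
  return str.split(' ').reverse().join(' ');
}

module.exports = { reverseWords };
```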
-This will update package.json with a new devDependencies
-section, as shown in Example 75:
+This will update package.json with a new devDependencies
+section, as shown in Example 74:
-Next, update the test
-command in package.json to run Jest, as shown in Example 76:
+Next, update the test
+command in package.json to run Jest, as shown in Example 75:
Now you can start writing tests. Create a file called reverse.test.js with the contents shown in
-Example 77:
+Example 76:
+Example 76:reverseWords
function (ch4/sample-app/reverse.test.js)reverseWords
function (ch4/sample-app/reverse.test.js)Add a second test for reverseWords
as shown in Example 78:
Add a second test for reverseWords
as shown in Example 77:
reverseWords
function (ch4/sample-app/reverse.test.js)reverseWords
function (ch4/sample-app/reverse.test.js)reverseWords
(ch4/sample-app/reverse.js)reverseWords
(ch4/sample-app/reverse.js)--coverage
flag to the test
command in package.json, as shown in
-Example 80:
+Example 79:
Now that code coverage has helped you see where your tests are lacking, head into reverse.test.js, and add a new unit
-test for reverseCharacters
, as shown in Example 81:
reverseCharacters
, as shown in Example 80:
reverseCharacters
(ch4/sample-app/reverse.test.js)reverseCharacters
(ch4/sample-app/reverse.test.js)First, update app.js to solely configure the Express app, and to export it, as shown in Example 82:
First, update app.js to solely configure the Express app, and to export it, as shown in
-Example 82:
+Example 81:
Next, create a new file called server.js that imports the code from app.js and has it listen on a port, as shown in
-Example 83:
+Example 82:
+Example 82:Make sure to update the start
command in package.json to now use server.js instead of app.js, as shown in
-Example 84:
+Example 83:
-Now, finally, you can add a test for the app in a new file called app.test.js, as shown in Example 85:
+Now, finally, you can add a test for the app in a new file called app.test.js, as shown in Example 84:
-As an example, let’s add automated tests for the lambda-sample
-OpenTofu module you built in Section 3.5.1.
-Copy that module, unchanged, from your work in that blog post into a folder for this
-blog post:
+As an example, let’s add automated tests for the lambda-sample
+OpenTofu module you built in
+Part 3. Copy that module, unchanged, from your work in that blog post into a
+folder for this blog post:
As an example, let’s use Terrascan as a static analysis tool for OpenTofu. First, create a config file for Terrascan
-called terrascan.toml, with the contents shown in Example 86:
+called terrascan.toml, with the contents shown in Example 85:skip-rules
section of your Terrascan configuration file as shown in
-Example 87:
+Example 86:
Next, in the lambda-sample
module, create a file called deploy.tftest.hcl, with the contents shown in
-Example 88:
lambda-sample
module (ch4/tofu/live/lambda-sample/deploy.tftest.hcl)lambda-sample
module (ch4/tofu/live/lambda-sample/deploy.tftest.hcl)Fast
Fast/Moderate [32]
Fast/Moderate [33]
Slow
-In What world-class software delivery looks like, you saw that companies with world-class software delivery processes are able to deploy
+In What World-Class Software Delivery Looks Like, you saw that companies with world-class software delivery processes are able to deploy
thousands of times per day. Continuous integration—including a CI server and thorough automated test suite—is one of
@@ -1600,7 +1600,7 @@
Inside the .github/workflows folder, create a file called app-tests.yml, with the contents shown in -Example 90:
+Example 89:Make a change to the sample app to intentionally return some text other than "Hello, World!", as shown in -Example 91:
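A GitHub Actions workflow of this kind generally looks something like the following sketch (action versions and paths are assumptions, not the book's exact file):

```yaml
# Hypothetical sketch of .github/workflows/app-tests.yml (illustrative only).
name: Sample App Tests
on: push
jobs:
  sample_app_tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 21
      - name: Install dependencies
        working-directory: sample-app
        run: npm ci
      - name: Run tests
        working-directory: sample-app
        run: npm test
```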
+Example 90:Aha! The automated test is still expecting the response text to be "Hello, World!" To fix this issue, update -app.test.js to expect "Fundamentals of DevOps!" as a response, as shown in Example 92:
+app.test.js to expect "Fundamentals of DevOps!" as a response, as shown in Example 91:In the ci-cd-permissions folder, create main.tf with the initial contents shown in Example 93:
+In the ci-cd-permissions folder, create main.tf with the initial contents shown in Example 92:
github-aws-oidc
module (ch5/tofu/live/ci-cd-permissions/main.tf)github-aws-oidc
module (ch5/tofu/live/ci-cd-permissions/main.tf)gh-actions-iam-roles
, it lives in the ch5/tofu/modules/gh-actions-iam-roles folder, and it knows how to create
-several IAM roles for CI/CD with GitHub Actions. Example 94 shows how to update your
+several IAM roles for CI/CD with GitHub Actions. Example 93 shows how to update your
ci-cd-permissions
module to make use of the gh-actions-iam-roles
module:
@@ -2964,7 +2964,7 @@ gh-actions-iam-roles
module (ch5/tofu/live/ci-cd-permissions/main.tf)gh-actions-iam-roles
module (ch5/tofu/live/ci-cd-permissions/main.tf)You should also create a file called outputs.tf that outputs the testing IAM role ARN, as shown in -Example 95:
+Example 94:ci-cd-permissions
module (ch5/tofu/live/ci-cd-permissions/outputs.tf)ci-cd-permissions
module (ch5/tofu/live/ci-cd-permissions/outputs.tf)lambda-sample
module (ch5/tofu/live/lambda-sample/variables.tf)lambda-sample
module (ch5/tofu/live/lambda-sample/variables.tf)Next, update main.tf to use var.name
instead of any hard-coded names, as shown in Example 97:
Next, update main.tf to use var.name
instead of any hard-coded names, as shown in Example 96:
lambda-sample
module to use the name
input variable instead of hard-coded names (ch5/tofu/live/lambda-sample/main.tf)lambda-sample
module to use the name
input variable instead of hard-coded names (ch5/tofu/live/lambda-sample/main.tf)Example 99 shows the second half of the workflow:
+Example 98 shows the second half of the workflow:
You disconnect one v1 replica from the load balancer, shut down the server, and move its hard drive to a new v2
-server (since it’s a network-attached hard-drive, you do the move through software).[37] Once that new v2 server starts passing health checks, the load balancer starts sending traffic
+server (since it’s a network-attached hard-drive, you do the move through software).[38] Once that new v2 server starts passing health checks, the load balancer starts sending traffic
to it.
It’s designed for 99.999999999% durability and 99.99% availability, which means you don’t need to worry too much
-about data loss or outages.[40]
+about data loss or outages.[41]
@@ -5698,7 +5698,7 @@
-It’s inexpensive, with most OpenTofu usage easily fitting into the AWS Free Tier.[41]
+It’s inexpensive, with most OpenTofu usage easily fitting into the AWS Free Tier.[42]
-Within the tofu-state folder, create a main.tf file with the contents shown in Example 100:
+Within the tofu-state folder, create a main.tf file with the contents shown in Example 99:
state-bucket
module (ch5/tofu/live/tofu-state/main.tf)state-bucket
module (ch5/tofu/live/tofu-state/main.tf)backend
configuration. As a first step, add a backend.tf file to the
-tofu-state
module with the contents shown in Example 101:
+tofu-state
module with the contents shown in Example 100:
You should make the same change in the lambda-sample
module as well, adding the backend.tf file shown in
-Example 102:
+Example 101:
lambda-sample
module to use S3 as a backend (ch5/tofu/live/lambda-sample/backend.tf)lambda-sample
module to use S3 as a backend (ch5/tofu/live/lambda-sample/backend.tf)Open up main.tf in the ci-cd-permissions
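A backend.tf of this kind generally looks like the following sketch (bucket, table, region, and key are placeholders, not the book's exact values):

```hcl
# Hypothetical sketch of backend.tf for the lambda-sample module.
terraform {
  backend "s3" {
    bucket         = "<YOUR-STATE-BUCKET>"   # must be globally unique
    key            = "lambda-sample/terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "<YOUR-LOCK-TABLE>"     # used for state locking
    encrypt        = true
  }
}
```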
module and add the code shown in Example 103 to enable
+
Open up main.tf in the ci-cd-permissions
module and add the code shown in Example 102 to enable
creating IAM roles for both plan
and apply
:
ci-cd-permissions
module to enable IAM roles for plan
and apply
(ch5/tofu/live/ci-cd-permissions/main.tf)ci-cd-permissions
module to enable IAM roles for plan
and apply
(ch5/tofu/live/ci-cd-permissions/main.tf)The pipeline described here represents only a small piece of a real-world deployment pipeline.[43] It’s missing several important aspects, including:
+The pipeline described here represents only a small piece of a real-world deployment pipeline.[44] It’s missing several important aspects, including:
Let’s first create a workflow for the plan
portion. Create a new file called .github/workflows/tofu-plan.yml with
-the contents shown in Example 105:
+the contents shown in Example 104:
tofu plan
(.github/workflows/tofu-plan.yml)tofu plan
(.github/workflows/tofu-plan.yml)Next, create a workflow for the apply
portion in a new file called _.github/workflows/tofu-apply.yml, with the
-contents shown in Example 106:
+contents shown in Example 105:
tofu apply
(.github/workflows/tofu-apply.yml)tofu apply
(.github/workflows/tofu-apply.yml)Make a change to the lambda-sample
module, such as changing the text it returns, as shown in
-Example 107:
+Example 106:
And make sure to similarly update the assertion in the automated test in deploy.tftest.hcl, as shown in -Example 108:
+Example 107:Virtually every company that has a world-class software delivery process, as the ones you heard about in -What world-class software delivery looks like, relies heavily on CI/CD to allow them to go fast. This is one of the surprising realizations +What World-Class Software Delivery Looks Like, relies heavily on CI/CD to allow them to go fast. This is one of the surprising realizations of real-world systems: agility requires safety. With cars, speed limits are determined not by the limits @@ -7621,7 +7621,7 @@
AdministratorAccess
Managed Policy are not a big risk.
AdministratorAccess
Managed Policy are not a big risk.
+8. This is where the term bus factor comes from: your team’s bus factor is the number of people you can lose (e.g., because they got hit by a bus, or perhaps something less dramatic, like they changed jobs) before you can no longer operate your business. You never want to have a bus factor of 1.
+
- Tip
-
- |
-
-
+
|
-
-
Those are insane differences. To put that into perspective, we’re talking the difference between deploying once every +
Those are staggering differences. To put that into perspective, we’re talking the difference between deploying once every two weeks vs many times per day; deployment processes that take 36 hours vs 5 minutes; outages that last 12 hours vs 4 @@ -1454,7 +1440,7 @@
+This book covers a lot of ground and includes a lot of detail: given the breadth of DevOps, this is unavoidable. To
+help you avoid missing the forest for the trees, I try to call out the key takeaways in each blog post
+as follows:
+ +
+
+ Tip
+
+ |
+
+
+
+ Key takeaway #1
+
+
+
+
+
+A key takeaway from the chapter.
+
+ |
+
+
+Pay special attention to these items, as they typically highlight the most important lessons in that
+post.
You might want to check out this repo before you begin reading so you can follow along with all the examples on your own
-computer:
+computer (if you are new to Git, check out the Git tutorial in Part 4):
-git clone https://github.com/brikis98/devops-book.git
+$ git clone https://github.com/brikis98/devops-book.git