Example Project from How to Deploy a Docker Container on AWS Lambda
AWS Lambda is a powerful computing model because it gives developers a known execution environment with a specific runtime that accepts and runs arbitrary code. But this also causes problems if you have a use case outside the environments predetermined by AWS.
To address this issue, AWS introduced Lambda Layers. Layers allow packaging .zip files with the libraries and dependencies needed by Lambda functions. But Lambda Layers still have limitations around testing, static analysis, and versioning. In December 2020, AWS Lambda released Docker container support.
Shortly after the announcement, the Serverless framework created the following example to demonstrate how to use the new feature. This blog post will break down that example by building the project from scratch. All the code for this project can be found on my GitHub.
```shell
git clone https://github.com/ajcwebdev/a-first-look.git
cd deployment/docker-lambda
```
Instead of globally installing the Serverless CLI, we have installed the `serverless` package as a local dependency in our project. As a consequence, to execute `sls` commands we must prefix them with `yarn` or `npx`. You can refer to the official Serverless documentation if you prefer to install the CLI globally.
Our project contains the following files:

- `app.js` for our Lambda function code that will return a simple message when invoked.
- `Dockerfile` for defining the dependencies, files, and commands needed to build and run our container image.
- `serverless.yml` for defining our AWS resources in code, which will be translated into a single CloudFormation template that generates a CloudFormation stack.
- `.gitignore` so we do not commit our `node_modules` or the `.serverless` directory, which contains our build artifacts and is generated when we deploy our project to AWS.
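For reference, a minimal `.gitignore` covering those two paths might look like the following (the actual file in the repo may include additional entries):

```
node_modules
.serverless
```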
The Serverless Framework lets you define a Dockerfile and point at it in the `serverless.yml` configuration file. The Framework makes sure the container is available in ECR and set up with the configuration Lambda needs.
```yaml
service: ajcwebdev-docker-lambda
frameworkVersion: '3'
```
We select the AWS provider and include an `ecr` section for defining images that will be built locally and uploaded to ECR.
Note: If you are using an Apple M1, you will need to uncomment the line that specifies `arm64` for the `architecture` in the `provider` property.
```yaml
provider:
  name: aws
  # architecture: arm64
  ecr:
    images:
      appimage:
        path: ./
```
The `functions` property tells the framework the image reference name (`appimage`) that is used elsewhere in our configuration. The location of the content of the Docker image is set with the `path` property. We use the same value for `image.name` as we do for the image we defined, `appimage`.
```yaml
functions:
  hello:
    image:
      name: appimage
```
Here is our complete `serverless.yml` file:
```yaml
# serverless.yml

service: ajcwebdev-docker-lambda
frameworkVersion: '3'

provider:
  name: aws
  # architecture: arm64
  ecr:
    images:
      appimage:
        path: ./

functions:
  hello:
    image:
      name: appimage
```
We are using the Node v14 image from the AWS ECR Gallery. The `CMD` instruction points to a file called `app.js` with an exported function called `handler`.
```dockerfile
# Dockerfile

FROM public.ecr.aws/lambda/nodejs:14

COPY app.js ./

CMD ["app.handler"]
```
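Because the AWS base images bundle the Lambda Runtime Interface Emulator, you can sanity-check the container locally before deploying. This optional step is not part of the original example; the tag `appimage` below is just a local name for the build:

```shell
# Build the image locally (requires Docker to be running)
docker build -t appimage .

# Start the container; the base image's Runtime Interface Emulator
# exposes the Lambda invoke API on port 8080 inside the container
docker run -p 9000:8080 appimage

# In another terminal, invoke the function with an empty event
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
```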
`app.js` contains the code that will be executed by our handler when the function is invoked. It will return a JSON object containing a message clarifying exactly why anyone would ever want to do this in the first place.
```javascript
// app.js

'use strict'

module.exports.handler = async (event) => {
  const message = `Cause I don't want a server, but I do still want a container`
  return {
    statusCode: 200,
    body: JSON.stringify(
      { message }, null, 2
    ),
  }
}
```
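Before deploying, you can exercise the handler logic directly in Node. The snippet below inlines a copy of the handler so it is self-contained; in the real project you would `require('./app.js')` instead:

```javascript
// Inlined copy of the handler from app.js, for a quick local sanity check
const handler = async (event) => {
  const message = `Cause I don't want a server, but I do still want a container`
  return {
    statusCode: 200,
    body: JSON.stringify(
      { message }, null, 2
    ),
  }
}

// Invoke it the way Lambda would, with an empty event object
handler({}).then((response) => {
  console.log(response.statusCode)
  console.log(JSON.parse(response.body).message)
})
```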
We are now able to generate our container, deploy it to ECR, and execute our function.
You will need to set your AWS credentials with the `sls config credentials` command. This step can be skipped if you are using a global install of the CLI that is already configured with your credentials.
```shell
yarn sls config credentials \
  --provider aws \
  --key YOUR_ACCESS_KEY_ID \
  --secret YOUR_SECRET_ACCESS_KEY
```
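Alternatively, the Serverless CLI will pick up the standard AWS environment variables, so exporting them in your shell works as well:

```shell
export AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
```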
The `sls deploy` command deploys your entire service via CloudFormation. In order to build images locally and push them to ECR, you need to have Docker installed and running on your local machine.
```shell
yarn sls deploy
```
The `sls invoke` command invokes a deployed function.
```shell
yarn sls invoke --function hello
```
This will output the following message:
```json
{
  "statusCode": 200,
  "body": "{\n  \"message\": \"Cause I don't want a server, but I do still want a container\"\n}"
}
```