
Exploring The Container Images Function in AWS Lambda

by Instil Software, March 15th, 2021

Too Long; Didn't Read

AWS has released a new way for developers to package and deploy Lambda functions as “Container Images”. This enables us to build a Lambda from a Docker image of our own creation. The benefit of this is that we can now easily include dependencies along with our code in a way that is more familiar to developers. Container images provide a way to mix and match different tools to help developers build a scalable solution to the problem they are trying to solve. In this article, we are going to get started with the new container image option and build our own Lambda.


Ever wish you could combine the portability of containers with the scalability of Lambda functions? Well, now you can! Recently, AWS released a new way for developers to package and deploy their Lambda functions as “Container Images”. This enables us to build a Lambda from a Docker image of our own creation. The benefit is that we can now easily include dependencies along with our code in a way that is more familiar to developers. If you have used Docker containers before, this is much simpler to get started with than the other option: Lambda layers.

AWS has provided developers with a number of base images for each of the current Lambda runtimes (Python, Node.js, Java, .NET, Go, Ruby). It is easy for a developer to then use one of these images as a base and build their own image on top.
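
The base images are hosted in the Amazon ECR Public Gallery, so you can pull and inspect them like any other image. For example, to grab the Node.js base image we use below:

docker pull public.ecr.aws/lambda/nodejs:12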

Of course, there are many sensible use cases for container images. Perhaps you want to include some machine learning dependencies? Maybe you would love to have FFMPEG in your Lambda for your video processing needs? Or you want to nuke your entire AWS account to avoid a hefty bill?

In this article, we are going to build a container image with aws-nuke installed! This will delete everything in an AWS account (excluding our fancy new container image Lambda). aws-nuke is built using Go, but we are going to start from the Node.js base image and build our Lambda using JavaScript. The tool isn’t available on NPM, so there is no easy way to pull it into our Lambda function, but container images provide a way for developers to mix and match different tools to build a scalable solution to the problem they are trying to solve.

To get started with our new container image, we can create a Dockerfile like so:

FROM public.ecr.aws/lambda/nodejs:12

COPY ./lambda/nuke.js ./lambda/package*.json ./
RUN npm install
CMD [ "nuke.lambdaHandler" ]

As you can see, we are building from the lambda/nodejs:12 base image and copying over our Lambda function code. Notice the last line of our Dockerfile, CMD [ "nuke.lambdaHandler" ]. Because we are using one of the base images, it comes pre-installed with the Lambda Runtime Interface Client.

The runtime interface client in your container image manages the interaction between Lambda and your function code. The Runtime API, along with the Extensions API, defines a simple HTTP interface for runtimes to receive invocation events from Lambda and respond with success or failure indications.

Therefore CMD [ "nuke.lambdaHandler" ] lets the interface client know what handler function to call when it receives an invocation event.
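
Under the hood, the interface client long-polls the Runtime API for invocation events and posts each handler result back. A heavily simplified sketch of that loop (the real client is the aws-lambda-ric package, which also handles errors, timeouts and the context object) might look something like this:

// Simplified, illustrative version of the Runtime Interface Client loop.
// AWS_LAMBDA_RUNTIME_API is set by Lambda to the API's host:port.
const http = require('http');

const api = process.env.AWS_LAMBDA_RUNTIME_API;
const { lambdaHandler } = require('./nuke');

function request(method, path, body) {
    return new Promise((resolve, reject) => {
        const req = http.request(`http://${api}${path}`, { method }, (res) => {
            let data = '';
            res.on('data', (chunk) => data += chunk);
            res.on('end', () => resolve({ headers: res.headers, body: data }));
        });
        req.on('error', reject);
        if (body) req.write(body);
        req.end();
    });
}

(async () => {
    while (true) {
        // Block until Lambda has an invocation event for us
        const next = await request('GET', '/2018-06-01/runtime/invocation/next');
        const requestId = next.headers['lambda-runtime-aws-request-id'];
        const result = await lambdaHandler(JSON.parse(next.body));
        // Report the handler's result back to the Runtime API
        await request('POST',
            `/2018-06-01/runtime/invocation/${requestId}/response`,
            JSON.stringify(result));
    }
})();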

Before we add the nuclear option, let's create the skeleton for our handler function:

exports.lambdaHandler = async (event) => {
    const response = { statusCode: 200 };
    return response;
};

For now, it simply returns a 200 response.

Not only does our container image include the Lambda Runtime Interface Client, but it also includes the Runtime Interface Emulator. This allows you to test your function locally, which, in my opinion, is one of the killer reasons to adopt container images for your project.

Given we have a project structure like this:

.
├── Dockerfile
├── docker-compose.yml
└── lambda
    ├── nuke.js
    └── package.json

Then, to build our container image, we simply use the Docker CLI:

docker build -f ./Dockerfile -t instil-nuke .

And to run it locally:

docker run -p 9000:8080 instil-nuke

Then, to test our function locally, we just need to hit our Lambda with an HTTP request. In this example, we are posting an empty JSON body:

➜  curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
{"statusCode":200}%

The URL seems strange, but the Runtime Interface Emulator is simply providing an endpoint that matches the Invoke endpoint of the Lambda API. The only difference between this local URL and the real API URL is that the function name is hardcoded as function.
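
Once deployed, the same call goes through the real Invoke endpoint. For comparison, here is a minimal sketch of invoking the deployed function with the AWS SDK for JavaScript (assuming we name the function instil-nuke later on):

// Hypothetical example: invoke the deployed function via the real Lambda API
const AWS = require('aws-sdk');

const lambda = new AWS.Lambda({ region: 'eu-west-1' });

lambda.invoke({ FunctionName: 'instil-nuke', Payload: '{}' })
    .promise()
    .then((result) => console.log(result.Payload.toString()));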

Being able to run our function locally like this greatly shortens the feedback loop when developing your Lambda. There are other options for running Lambdas locally, such as sam local, but the container image approach gives you a local test environment that is much closer to how the function will actually run on AWS.

Now that we have our project structure in place, let's take a look at adding AWS-nuke to our container image.


FROM public.ecr.aws/lambda/nodejs:12
LABEL maintainer="Instil <[email protected]>" 

RUN yum -y update
RUN yum -y install tar gzip

COPY ./resources/aws-nuke-v2.15.0.rc.3-linux-amd64.tar.gz ./resources/nuke-config.yml ./
RUN tar -xzf ./aws-nuke-v2.15.0.rc.3-linux-amd64.tar.gz && mv aws-nuke-v2.15.0.rc.3-linux-amd64 aws-nuke

COPY ./lambda/nuke.js ./lambda/package*.json ./
RUN npm install
CMD [ "nuke.lambdaHandler" ]

Adding dependencies works just how you would expect if you have used Docker before. In the above example, we are installing tar and gzip using yum, then copying the aws-nuke release tarball into our image and extracting it.
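
One thing we haven’t shown is nuke-config.yml, which aws-nuke requires before it will run. As a rough sketch (the account IDs are placeholders, and the exact keys, such as account-blocklist and the LambdaFunction filter, should be checked against the docs for your aws-nuke version), a minimal config that spares our Lambda might look like:

regions:
  - eu-west-1
  - global

# Accounts that must never be nuked (placeholder ID)
account-blocklist:
  - "111111111111"

accounts:
  "222222222222": # placeholder: the account this Lambda runs in
    filters:
      LambdaFunction:
        - "instil-nuke" # spare the Lambda doing the nuking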

Then, all we need to do is update our function to execute aws-nuke.

const { execSync } = require('child_process');

// Run a shell command, logging the command and any captured output
function run(command) {
    console.log(command);
    // stdio 'inherit' streams the command's output straight to the Lambda logs
    const result = execSync(command, {stdio: 'inherit'});
    if (result) {
        console.log(result.toString());
    }
}

function nuke() {
    console.log("Nuking this AWS account...");
    // Lambda injects the execution role's temporary credentials as environment variables
    const accessKey = process.env.AWS_ACCESS_KEY_ID;
    const secretAccessKey = process.env.AWS_SECRET_ACCESS_KEY;
    const sessionToken = process.env.AWS_SESSION_TOKEN;
    run(`./aws-nuke -c nuke-config.yml --access-key-id ${accessKey} --secret-access-key ${secretAccessKey} --session-token ${sessionToken} --force --force-sleep 3`);
    console.log("Your AWS account has been nuked, you can sleep peacefully knowing that you will no longer get an unexpected bill.");
}

exports.lambdaHandler = async (event) => {
    nuke();
    const response = { statusCode: 200 };
    return response;
};

We can use execSync to execute a command in our running Lambda. It’s easy to see how simple it is to utilize external dependencies in our Lambda environment with this new container image option. Notice that we are pulling AWS access keys from environment variables so that aws-nuke can use them; this is the default behaviour for Lambda functions, and they are the credentials obtained from the function’s execution role.
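
This also means we can rehearse the whole flow locally with the Runtime Interface Emulator by passing our own (short-lived, sandbox-account!) credentials into the container, for example:

docker run -p 9000:8080 \
    -e AWS_ACCESS_KEY_ID=... \
    -e AWS_SECRET_ACCESS_KEY=... \
    -e AWS_SESSION_TOKEN=... \
    instil-nuke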

With our updated container image ready to nuke our account, all we need to do is deploy it. For this, we need to create an ECR repository and push our image to it:

# Replace [AWS_ACCOUNT_NUMBER] with your own AWS account number
aws ecr create-repository --repository-name instil-nuke --image-scanning-configuration scanOnPush=true
docker tag instil-nuke:latest [AWS_ACCOUNT_NUMBER].dkr.ecr.eu-west-1.amazonaws.com/instil-nuke:latest
aws ecr get-login-password | docker login --username AWS --password-stdin [AWS_ACCOUNT_NUMBER].dkr.ecr.eu-west-1.amazonaws.com
docker push [AWS_ACCOUNT_NUMBER].dkr.ecr.eu-west-1.amazonaws.com/instil-nuke:latest

Now that our container image lives in AWS, we just need to create our Lambda function. In the Create function page of the AWS management console, you will notice there is a new option to use a Container image as your starting point:

Choosing this option then enables you to pick your container image; click the Browse images button to select your freshly uploaded image. You should be left with something like this:
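
If you prefer the command line, the same function can be created with the AWS CLI instead. Here is a sketch, with the execution role ARN as a placeholder you would replace with your own Lambda execution role:

# Replace [AWS_ACCOUNT_NUMBER] and [EXECUTION_ROLE] with your own values
aws lambda create-function \
    --function-name instil-nuke \
    --package-type Image \
    --code ImageUri=[AWS_ACCOUNT_NUMBER].dkr.ecr.eu-west-1.amazonaws.com/instil-nuke:latest \
    --role arn:aws:iam::[AWS_ACCOUNT_NUMBER]:role/[EXECUTION_ROLE]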

And that’s it! All that’s left to do is trigger our Lambda function. For our example, we could detonate the nuke once we get a billing alarm over a certain threshold. For the sake of keeping this article focused on container images, let's just trigger it with a test event for now and inspect the output. We will publish another article in the future explaining how to hook this up to a billing alarm.

Notice the very disappointing output of our detonation:

The above resources would be deleted with the supplied configuration. Provide --no-dry-run to actually destroy resources.

You didn’t think I was actually going to nuke my AWS account, did you? 😊

If you would like to speak to us about how we can help your business either with serverless training or software development, please get in touch by email.

This article was originally posted here.