Ever wish you could combine the portability of containers with the scalability of Lambda functions? Well, now you can! AWS recently released a new way for developers to package and deploy their Lambda functions as "Container Images". This enables us to build a Lambda from a Docker image of our own creation, meaning we can now easily include dependencies alongside our code in a way that is more familiar to developers. If you have used Docker containers before, this is much simpler to get started with than the alternative, Lambda layers. AWS provides a number of base images for each of the current Lambda runtimes (Python, Node.js, Java, .NET, Go, Ruby), and it is easy to use one of these as a base and build your own image on top. Of course, there are many sensible use cases for container images. Perhaps you want to include some machine learning dependencies? Maybe you would love to have FFMPEG in your Lambda for your video processing needs? Or you want to nuke your entire AWS account to avoid a hefty bill? You heard me: in this blog article, we are going to build a container image with aws-nuke installed! This will delete everything in an AWS account (excluding our fancy new container image Lambda). aws-nuke is built using Go, but we are going to start from the Node.js base image and build our own Lambda using JavaScript. This library isn't available on NPM, so there is no easy way to pull it into our Lambda function; container images, however, provide a way for developers to mix and match different tools to build a scalable solution to the problem they are trying to solve.
To get started with our new container image, we can create a Dockerfile like so:

```dockerfile
FROM public.ecr.aws/lambda/nodejs:12

COPY ./lambda/nuke.js ./lambda/package*.json ./

RUN npm install

CMD [ "nuke.lambdaHandler" ]
```

As you can see, we are building from the `lambda/nodejs:12` base image and copying over our Lambda function code. Notice the last line of our Dockerfile, `CMD [ "nuke.lambdaHandler" ]`. Because we are using one of the base images, it comes pre-installed with the Lambda Runtime Interface Client. The runtime interface client in your container image manages the interaction between Lambda and your function code. The Runtime API, along with the Extensions API, defines a simple HTTP interface for runtimes to receive invocation events from Lambda and respond with success or failure indications. Therefore `CMD [ "nuke.lambdaHandler" ]` lets the interface client know which handler function to call when it receives an invocation event.

Before we add the nuclear option, let's create the skeleton for our handler function:

```javascript
exports.lambdaHandler = async (event) => {
    const response = {
        statusCode: 200
    };
    return response;
};
```

For now, it simply returns a 200 response. Not only does our container image include the Lambda Runtime Interface Client, it also includes the Runtime Interface Emulator. This allows you to test your function locally, which in my opinion is one of the killer reasons to adopt container images for your project.

Given we have a project structure like this:

```
.
├── Dockerfile
├── docker-compose.yml
└── lambda
    ├── nuke.js
    └── package.json
```

Then to build our container image, we simply use the Docker CLI:

```shell
docker build -f ./Dockerfile -t instil-nuke .
```

And to run it locally:

```shell
docker run -p 9000:8080 instil-nuke
```

Then to test our function locally, we just need to hit our Lambda with an HTTP request.
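The `docker-compose.yml` in the project tree is never shown in the article. As a sketch only, an equivalent compose file mirroring the `docker run` command might look like this (the service name is my own invention):

```yaml
# Hypothetical docker-compose.yml equivalent to `docker run -p 9000:8080 instil-nuke`
version: "3.8"
services:
  nuke:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "9000:8080"
```

With this in place, `docker-compose up --build` rebuilds and runs the Lambda container in one step.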
In this example, we are posting an empty JSON body:

```shell
➜ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
{"statusCode":200}%
```

The URL seems strange, but the Runtime Interface Emulator is simply providing an endpoint that matches the Invoke endpoint of the Lambda API. The only difference between this local URL and the real API URL is that our function name is hardcoded as `function`.

Being able to run our function locally like this greatly reduces the feedback loop when developing your Lambda. There are other options out there for running Lambdas locally, for example `sam local`, but the container image approach gives you a local test environment that is much closer to how it will be run on AWS.

Now that we have our project structure in place, let's take a look at adding aws-nuke to our container image.

```dockerfile
FROM public.ecr.aws/lambda/nodejs:12

LABEL maintainer="Instil <team@instil.co>"

RUN yum -y update
RUN yum -y install tar gzip

COPY ./resources/aws-nuke-v2.15.0.rc.3-linux-amd64.tar.gz ./resources/nuke-config.yml ./
RUN tar -xzf ./aws-nuke-v2.15.0.rc.3-linux-amd64.tar.gz && mv aws-nuke-v2.15.0.rc.3-linux-amd64 aws-nuke

COPY ./lambda/nuke.js ./lambda/package*.json ./
RUN npm install

CMD [ "nuke.lambdaHandler" ]
```

Adding dependencies works just how you would expect if you have used Docker before. In the above example, we are adding dependencies using yum and copying aws-nuke onto our image. Then all we need to do is update our function to execute aws-nuke.
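The `nuke-config.yml` copied into the image above isn't listed in the article. As a rough sketch, it might look something like the following; the field names are taken from the aws-nuke README, the account IDs are placeholders, and you should check the exact schema against the aws-nuke version you download:

```yaml
# Hypothetical nuke-config.yml - aws-nuke refuses to run without at
# least one blocklisted account, as a safety check
regions:
  - eu-west-1
  - global

account-blocklist:
  - "999999999999" # e.g. a production account that must never be nuked

accounts:
  "000000000000": {} # the account to nuke; an empty mapping means no filters
```

aws-nuke always performs a dry run first, which is exactly the behaviour we'll see later when we trigger the Lambda without `--no-dry-run`.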
```javascript
const { execSync } = require('child_process');

function run(command) {
    console.log(command);
    const result = execSync(command, { stdio: 'inherit' });
    if (result) {
        console.log(result.toString());
    }
}

function nuke() {
    console.log("Nuking this AWS account...");
    const accessKey = process.env.AWS_ACCESS_KEY_ID;
    const secretAccessKey = process.env.AWS_SECRET_ACCESS_KEY;
    const sessionToken = process.env.AWS_SESSION_TOKEN;
    run(`./aws-nuke -c nuke-config.yml --access-key-id ${accessKey} --secret-access-key ${secretAccessKey} --session-token ${sessionToken} --force --force-sleep 3`);
    console.log("Your AWS account has been nuked, you can sleep peacefully knowing that you will no longer get an unexpected bill.");
}

exports.lambdaHandler = async (event) => {
    nuke();
    const response = {
        statusCode: 200
    };
    return response;
};
```

We can use `execSync` to execute a command in our running Lambda; it's easy to see how simple it is to utilise external dependencies in our Lambda environment with this new container image option. Notice that we are pulling AWS access tokens from environment variables so that aws-nuke can use them; this is default behaviour for Lambda functions, and they are the access keys obtained from the function's execution role.

With our updated container image ready to nuke our account, all we need to do is deploy it. For this we need to create an ECR repository and push our image to it:

```shell
# Replace [AWS_ACCOUNT_NUMBER] with your own AWS account number
aws ecr create-repository --repository-name instil-nuke --image-scanning-configuration scanOnPush=true

docker tag instil-nuke:latest [AWS_ACCOUNT_NUMBER].dkr.ecr.eu-west-1.amazonaws.com/instil-nuke:latest

aws ecr get-login-password | docker login --username AWS --password-stdin [AWS_ACCOUNT_NUMBER].dkr.ecr.eu-west-1.amazonaws.com

docker push [AWS_ACCOUNT_NUMBER].dkr.ecr.eu-west-1.amazonaws.com/instil-nuke:latest
```

Now that our container image lives in AWS, we just need to create our Lambda function. In the Create function page of the AWS management console, you will notice there is a new option to use a Container image as your starting point:

Choosing this option then enables you to pick your container image; click the Browse images button to select your freshly uploaded image. You should be left with something like this:

And that's it! All that's left to do is trigger our Lambda function. For our example, we could detonate the nuke once we get a billing alarm over a certain threshold, but for the sake of keeping this article focused on container images, let's just trigger it with a test event for now and inspect the output. We will publish another article in the future explaining how to hook this up to a billing alarm.

Notice the very disappointing output of our detonation:

```
The above resources would be deleted with the supplied configuration. Provide --no-dry-run to actually destroy resources.
```

You didn't think I was actually going to nuke my AWS account, did you? 😊

This article was originally posted here