Marco Lüthy


How to beat the AWS Lambda deployment limits

Deploying function code from S3 allows for substantially larger deployment packages when compared to directly uploading to Lambda.

Over the past few months I’ve been spending a lot of time on projects like Serverless Chrome and on adventures recording video from headless Chrome on AWS Lambda. Consequently, I’ve had to worry about the size of my Lambda function deployment packages. When compressed, Headless Chrome itself gobbles up 48 MB. This is just 2 MB shy of the 50 MB limit documented in the AWS Lambda Developer Guide. Fortunately the 50 MB limit is kind of a lie.

The documented limits in the AWS Lambda Developer Guide

Most developers seem to interpret the documented Lambda limits as the limit. So did I, until I tried to cram both Headless Chrome and FFmpeg into a single Lambda function. That’s a combined 64 MB compressed. To my surprise, AWS didn’t complain! In practice, the limit for the Lambda function deployment package size is effectively double the documented default limit if you have AWS Lambda pull your deployment package from S3 rather than directly uploading it. The technical limit is higher. What’s unclear is whether this limit differs from user to user as the documentation seems to be careful to label the 50 MB limit as a default limit. Many default AWS limits can be raised with a Service Limit Increase support request.

“So what are the actual limits?” a few others and I wondered. Well, let’s find out. We can test the actual limits by uploading a few dummy Lambda functions of varying sizes. There are two ways to get your Lambda function’s code into AWS Lambda: either directly uploading the function’s deployment package, or having Lambda pull it from S3. We’ll test both.

Preparing Test Deployment Packages

We need to create some deployment packages to test with. Each needs to be a controllable size. We can do this using the Unix command-line utility dd to create random data of a specified size. Zip files use the deflate algorithm, which reduces file size by exploiting repetition in data (that’s a slight oversimplification). Since there’s no repetition in randomness, we can use random data to create deployment packages of almost exactly any size we want. For example, zipping 1 MB of random data will give us a zip file that’s 1 MB. 10 MB of random data, 10 MB zip file; 100 MB of random data, 100 MB zip file; etc.

First we create a few test files:

dd if=/dev/urandom of=49MB.txt bs=1048576 count=49
dd if=/dev/urandom of=50MB.txt bs=1048576 count=50
dd if=/dev/urandom of=100MB.txt bs=1048576 count=100
dd if=/dev/urandom of=249MB.txt bs=1048576 count=249
dd if=/dev/urandom of=250MB.txt bs=1048576 count=250

Here we’re using the dd program to create 5 files of random “data” read from /dev/urandom in chunks (bs) of 1 MB (1,048,576 bytes is 1024 kilobytes, which is 1 megabyte). We picked these sizes because they’re just under and at the limits described in the documentation:

  • 50 MB: Lambda function deployment package size (compressed .zip/.jar file)
  • 250 MB: Size of code/dependencies that you can zip into a deployment package (uncompressed .zip/.jar size)

Next we zip each file:

zip 49MB.zip 49MB.txt
zip 50MB.zip 50MB.txt
zip 100MB.zip 100MB.txt
zip 249MB.zip 249MB.txt
zip 250MB.zip 250MB.txt

If we check each zip file, we see that zip wasn’t able to deflate (compress) our random data:

$ ls -lhtr | grep zip
-rw-r--r-- 1 marco wheel 49M Jul 3 21:11 49MB.zip
-rw-r--r-- 1 marco wheel 50M Jul 3 21:11 50MB.zip
-rw-r--r-- 1 marco wheel 100M Jul 3 21:11 100MB.zip
-rw-r--r-- 1 marco wheel 249M Jul 3 21:11 249MB.zip
-rw-r--r-- 1 marco wheel 250M Jul 3 21:13 250MB.zip

The zip files are as large as the files we zipped! Next, we need to set up AWS for our Lambda tests.

Boring AWS Prerequisites

We’re ready for a fun (well, not-so-fun) adventure with the AWS CLI. We’ll skip over credentials setup. Before we can create a new Lambda function, we need to create an IAM Role for it to use. We don’t really care about this Lambda function, or the role, and we’re never going to upload any actual code. We’ll tear down this setup at the end. We can create a role with the IAM create-role command like so:

aws iam create-role --role-name foobar --assume-role-policy-document '{"Version":"2012-10-17","Statement":{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}}'

Now we’re ready to create the test Lambda function and test the differently sized deployment packages.

Test: Upload Code Directly

To create the Lambda function we use the create-function command. We’ll use the 49 MB zip file for our deployment package. If you’re following along, make sure to replace 000000000000 with your AWS Account ID:

aws lambda create-function --function-name limits-test --runtime nodejs6.10 --role arn:aws:iam::000000000000:role/foobar --handler foobar --region us-east-1 --zip-file fileb://./49MB.zip

That worked. No surprise, though. We haven’t exceeded the 50 MB limit that’s in the documentation. Next we try the 50 MB zip file. This time we use the update-function-code command:

aws lambda update-function-code --function-name limits-test --region us-east-1 --zip-file fileb://./50MB.zip

Here, we run into the first limit. The command results in the following error:

An error occurred (RequestEntityTooLargeException) when calling the UpdateFunctionCode operation: Request must be smaller than 69905067 bytes for the UpdateFunctionCode operation

Interestingly, the error states that the limit is 69905067 bytes, which is just under 70 MB. That’s more than 50 MB! Notably, the error specifically addresses the request and not the deployment package (zip file). The extra headroom is presumably there to account for request overhead in the AWS API: 69905067 bytes is exactly 50 MiB (52428800 bytes) multiplied by 4/3 and rounded up, which matches the expansion of base64-encoding the zip file data. So far the 50 MB limit holds true-ish. But we’re not defeated yet.
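The arithmetic checks out in the shell, too. Assuming the overhead really is base64 expansion (an educated guess; AWS doesn’t document it), 50 MiB of zip data grows to exactly the number in the error message:

```shell
# 50 MiB in bytes
bytes=$((50 * 1024 * 1024))          # 52428800
# base64 expands data by a factor of 4/3; round up to a whole byte
limit=$(( (bytes * 4 + 2) / 3 ))
echo "$limit"                        # prints 69905067, matching the error
```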

Test: Upload Code via S3

There’s a second way to upload Lambda function code: via S3. Let’s try that next. First, we create an S3 bucket with the AWS CLI’s s3 mb command. Bucket names are shared globally across all AWS users, so change limits-test-foobar-bucket to something unique.

aws s3 mb s3://limits-test-foobar-bucket --region us-east-1

Next, we upload each of our zip files to the newly created S3 bucket. This will transfer 698 MB of compressed-but-not-so-compressed random junk to S3—a candidate for one of the most wasteful, useless things one can do with technology and their Internet connection. Perhaps it’ll waste someone’s time over at your local spy agency:

aws s3 cp ./ s3://limits-test-foobar-bucket/ --recursive --exclude "*" --include "*.zip"

With the zip files uploaded to S3, we try the update-function-code command again, but this time specifying our S3 bucket and zip file’s object key instead of uploading the zip file directly:

aws lambda update-function-code --function-name limits-test --region us-east-1 --s3-bucket limits-test-foobar-bucket --s3-key 50MB.zip

And… success! We were able to use a 50 MB deployment package. Cool. Let’s ramp it up. What about the 100 MB zip?

aws lambda update-function-code --function-name limits-test --region us-east-1 --s3-bucket limits-test-foobar-bucket --s3-key 100MB.zip

Bam. No errors from AWS. Let’s get crazy. We’ll try the 250 MB zip:

aws lambda update-function-code --function-name limits-test --region us-east-1 --s3-bucket limits-test-foobar-bucket --s3-key 250MB.zip

That was too much. We get the following error:

An error occurred (InvalidParameterValueException) when calling the UpdateFunctionCode operation: Unzipped size must be smaller than 262144000 bytes

The error matches the documented “Size of code/dependencies that you can zip into a deployment package (uncompressed .zip/.jar size)” limit. Our 250 MB zip file is exactly 262144000 bytes, which is not smaller than the limit. We’ve found the true limit on the size of the uploaded deployment package: it’s the 250 MB uncompressed code/dependencies limit. In other words, via S3 we can upload any Lambda function zip whose extracted contents total less than 250 MB. Finally, let’s tear down and recap.
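That number is no coincidence; 262144000 bytes is exactly 250 MiB:

```shell
# 250 * 1024 * 1024 bytes = the limit from the error message
echo $((250 * 1024 * 1024))   # prints 262144000
```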

Cleaning Up

To wrap up our test, let’s clean up the mess we’ve made. We can delete the Lambda function with the AWS CLI’s delete-function command, and the IAM role with the delete-role command:

aws lambda delete-function --function-name limits-test --region us-east-1
aws iam delete-role --role-name foobar

We can also delete the S3 bucket we made with the rb command. Warning: this will delete the bucket and everything in it.

aws s3 rb s3://limits-test-foobar-bucket --force

The Real-World Limits

We’ve discovered that the documented limit of 50 MB seems to be true when uploading a Lambda function’s deployment package directly. However, when using the S3 method to upload a Lambda function’s code, we can upload up to 250 MB! The limit in that case is the 250 MB uncompressed code/dependency size limit.


It appears that what matters for the limit is the size of all of your code and its dependencies when uncompressed, i.e. extracted from your zip. For example, your Lambda function may depend on many 3rd party Node modules, and a node_modules folder can quickly add up to 250 MB uncompressed. In practical terms, this means the size of your deployment package’s zip file could be around 100 MB, assuming your code and dependencies compress to roughly 40% of their uncompressed size.

Unfortunately, some tools like Apex don’t currently support the S3 method. I’m only aware of Serverless using S3 when deploying Lambda functions, because of its use of CloudFormation. Additionally, while I haven’t tested this, it’s possible that a larger deployment package may negatively impact your Lambda function’s cold-start time: the larger the zip file to transfer to the function’s container and decompress, the longer it’ll take before the function can execute.

Your mileage may vary. The limits may differ between regions, perhaps across AWS accounts, maybe even by runtime. Or not. AWS could also change them at any point. Use at your own risk.
