4 Tips for AWS Lambda Cost & Speed Optimization

Written by taavi-rehemagi | Published 2021/05/06

TLDR: Taavi Rehemägi, CEO of Dashbird, shares tips and best practices for building production-ready Lambdas with optimal performance and cost. He recommends setting up performance tracking for your Lambda functions and getting alerts on failures. The amount of virtual CPU allocated to a Lambda function is linked to the memory provisioned for that function: a function with 256MB of memory has roughly twice the CPU of a 128MB function, and at the current maximum of 10GB of memory you get 6 virtual CPU cores.

By the end of this AWS Lambda optimization article, you will have a workflow of continuously monitoring and improving your Lambda functions and getting alerts on failures.
Serverless has been the MVP for the last couple of years and I’m betting it’s going to play a bigger role next year in backend development.
AWS Lambda is the most used and mature product in the Serverless space today. That’s why I’m going to share some tips and best practices for building production-ready Lambdas with optimal performance and cost.

1. Tracking Performance and Cost of Lambdas

Before making any changes to your functions, I recommend setting up performance tracking for them. Monitoring invocation counts, durations, memory usage, and cost of your Lambda functions allows you to pinpoint issues and make informed decisions fast. You can use Dashbird for this: it's easy to set up, relies on CloudWatch Logs, and gives you quick access to and visualization of your mission-critical AWS data.
Let’s dive into the different strategies to turbocharge your functions.

2. Optimal Memory Provisioning and Lambda Performance Tuning

The amount of virtual CPU allocated to your Lambda function is linked to the memory provisioned for that function. A function with 256MB of memory will have roughly twice the CPU of a 128MB function. If you configure the current maximum of 10GB of memory, you get 6 virtual CPU cores. Memory size also affects cold start time linearly.
Because more memory also costs more per millisecond, developers have to decide whether to optimize for speed or for cost.
Here’s a good example from “Cost and Speed Optimization” by Alex Casalboni:
“In terms of cost, the 128MB configuration would be the cheapest (but very slow!). Interestingly, using the 1536MB configuration would be both faster and cheaper than using 1024MB. This happens because the 1536MB configuration is 1.5 times more expensive, but we’ll pay for half the time, which means we’d roughly save 25% of the cost overall.”
Lambda performance tuning can be done manually or through external tools that run your function multiple times with different memory configurations and compare the resulting duration and cost. This way, you can choose the configuration you deem best for your particular use case.
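To make the speed/cost trade-off concrete, here is a rough back-of-the-envelope sketch in Node.js. The durations and the per-GB-second price below are assumed example values, not measurements; plug in your own numbers from CloudWatch and current AWS pricing for your region.

```javascript
// Rough per-invocation cost comparison for two memory settings.
// PRICE_PER_GB_SECOND and the durations are example values only.
const PRICE_PER_GB_SECOND = 0.0000166667; // example x86 price, verify for your region

function invocationCost(memoryMb, durationMs) {
  const gbSeconds = (memoryMb / 1024) * (durationMs / 1000);
  return gbSeconds * PRICE_PER_GB_SECOND;
}

// Hypothetical: the same function takes 2000ms at 1024MB and 1000ms at 1536MB.
console.log(invocationCost(1024, 2000)); // ~0.0000333 USD
console.log(invocationCost(1536, 1000)); // ~0.0000250 USD -- faster and ~25% cheaper
```

This mirrors the quote above: the larger configuration costs 1.5 times more per second but runs in half the time, so the overall bill goes down.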

3. Re-use Lambda Containers

AWS reuses Lambda containers for subsequent calls if the next call comes within 5 minutes. This allows developers to cache resources and implement connection pooling. Below is a checklist you can go through when optimizing for container reuse.
- Put everything that doesn’t change between requests outside of your function code. In the Node.js runtime, code outside of your handler function will only be executed once on a cold start.
- Store and reference external configurations and dependencies locally after the first execution, and limit the re-initialization of variables/objects on every invocation. Instead, use static initialization/constructors, global/static variables, and singletons.
- Keep alive and reuse connections (HTTP, database, etc.) established during a previous invocation, as shown in the sketch below.
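Here's a minimal sketch of that idea, assuming a hypothetical DynamoDB lookup: the SDK client and configuration live outside the handler, so they are created once on a cold start and reused while the container stays warm.

```javascript
// Hypothetical example: client and config are initialized once per container.
const { DynamoDBClient, GetItemCommand } = require("@aws-sdk/client-dynamodb");

// Created on cold start only, reused across warm invocations.
const dynamo = new DynamoDBClient({});
const TABLE_NAME = process.env.TABLE_NAME; // hypothetical configuration value

exports.handler = async (event) => {
  // Only per-request work happens inside the handler.
  const result = await dynamo.send(new GetItemCommand({
    TableName: TABLE_NAME,
    Key: { id: { S: event.id } },
  }));
  return result.Item;
};
```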
- Avoid using recursive code in your Lambda function, wherein the function automatically calls itself until some arbitrary criterion is met. This can lead to an unintended volume of function invocations and escalated costs.
Don’t do this:
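The original code sample was not preserved, so here is a hypothetical sketch of the anti-pattern: a handler that re-invokes itself to work through a list. Every recursion is a separate billed invocation, and a bug in the stop condition can trigger a runaway invocation loop.

```javascript
const { LambdaClient, InvokeCommand } = require("@aws-sdk/client-lambda");
const lambda = new LambdaClient({});

exports.handler = async (event) => {
  const items = event.items ?? [];
  if (items.length === 0) return;

  console.log("processing", items[0]); // stand-in for real work

  // Recursive self-invocation -- avoid this.
  await lambda.send(new InvokeCommand({
    FunctionName: process.env.AWS_LAMBDA_FUNCTION_NAME,
    InvocationType: "Event",
    Payload: JSON.stringify({ items: items.slice(1) }),
  }));
};
```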
Do this:
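A sketch of one alternative (assumed, since the original example was not preserved): process the whole batch in a loop inside a single invocation. If the batch is too large for one invocation, push the remainder to a queue (see the SQS note further down) instead of re-invoking the function.

```javascript
exports.handler = async (event) => {
  const items = event.items ?? [];
  for (const item of items) {
    console.log("processing", item); // stand-in for real work
  }
  return { processed: items.length };
};
```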
- Cache reusable resources with an LRU strategy. This is a way of improving performance for unbounded data. If your data grows without bounds, it may eventually no longer fit into memory. But you can still apply a caching strategy for the most important items and throw out the items that aren't used as often.
In the following code example, the in-memory cache only stores up to 100 items; once it reaches that limit, it starts dropping the oldest ones. This way, memory doesn't overflow when the data is unbounded. The expensiveApiCall function is only called if the item can't be found in the cache.
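The original snippet was not preserved, so below is a minimal reconstruction of what it describes: a Map-based in-memory cache declared outside the handler (so it survives container reuse) that keeps at most 100 items and evicts the least recently used entry once the limit is reached. expensiveApiCall is a stand-in for the slow external call mentioned above.

```javascript
const MAX_ITEMS = 100;
const cache = new Map(); // insertion order is preserved, so the first key is the oldest

async function expensiveApiCall(key) {
  // Stand-in for a slow external call (API, database, etc.).
  return { key, fetchedAt: Date.now() };
}

async function getItem(key) {
  if (cache.has(key)) {
    const value = cache.get(key);
    // Re-insert to mark this key as most recently used (LRU behaviour).
    cache.delete(key);
    cache.set(key, value);
    return value;
  }

  const value = await expensiveApiCall(key);
  if (cache.size >= MAX_ITEMS) {
    // Evict the least recently used entry (the first key in the Map).
    cache.delete(cache.keys().next().value);
  }
  cache.set(key, value);
  return value;
}

exports.handler = async (event) => getItem(event.key);
```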
- Beware of the provisioned memory when caching: an in-memory cache can only use what the function has, and Lambda's limit was raised to 10GB (with 6 vCPUs), which should cover most requirements. If your data doesn't fit into 10GB and caching isn't an option, fan the work out across multiple Lambda invocations. Scale horizontally, not vertically.
- Avoid calling Lambda functions directly from Lambda functions. Don’t do it recursively with itself, nor with other functions. If you call a Lambda function directly from a Lambda function and wait for the result, you pay for two: the one that waits and the one that is called. If you need to link multiple Lambda functions, let SQS, SNS, Kinesis, or Step Functions handle their interactions. This way, your function can finish quickly, and a cheaper service will do the waiting for the next function for you.
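A sketch of the decoupled pattern (the queue URL and payload names are assumptions): instead of invoking the next Lambda directly and paying for the time spent waiting on it, publish a message to SQS and let the next function be triggered from the queue.

```javascript
const { SQSClient, SendMessageCommand } = require("@aws-sdk/client-sqs");
const sqs = new SQSClient({});

exports.handler = async (event) => {
  // ... do this function's own work first ...

  await sqs.send(new SendMessageCommand({
    QueueUrl: process.env.NEXT_STEP_QUEUE_URL, // hypothetical env var
    MessageBody: JSON.stringify({ orderId: event.orderId }), // hypothetical payload
  }));

  // Return immediately; the downstream Lambda is invoked by SQS,
  // so this function never pays for the downstream execution time.
  return { queued: true };
};
```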

4. Log-Based Lambda Monitoring and Error Handling

AWS Lambda running slow?
With servers, collecting performance metrics and tracking failed executions is normally done by an agent that gathers telemetry and error information and sends it over HTTP. With AWS Lambda, this approach can slow down functions and, over time, add quite a bit of cost, not to mention the extra overhead of adding (and maintaining) third-party agents across a possibly large number of Lambda functions.
The great thing about Lambda functions is that all performance metrics and logs are sent to AWS CloudWatch. In itself, CloudWatch is not the perfect place to observe and set up error handling, but some services work on top of it and do a good job of providing visibility into your services.
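One common log-based pattern (an illustration, not something prescribed in this article) is to emit structured JSON log lines from the handler, so that tools built on top of CloudWatch Logs can filter, visualize, and alert on them without any in-function agent adding latency. doWork is a hypothetical stand-in for your business logic.

```javascript
exports.handler = async (event) => {
  try {
    const result = await doWork(event); // hypothetical business logic
    console.log(JSON.stringify({ level: "info", msg: "processed", itemId: event.itemId }));
    return result;
  } catch (err) {
    // Errors land in CloudWatch Logs, where log-based monitoring picks them up.
    console.error(JSON.stringify({ level: "error", msg: err.message, itemId: event.itemId }));
    throw err;
  }
};

async function doWork(event) {
  return { ok: true, itemId: event.itemId }; // stand-in
}
```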

Summary

There's a lot of room to optimize your Serverless stack, and it all starts with knowing the right ways to do it and locating the issues. I recommend following the instructions above and testing the performance difference after making changes to your Lambda functions. Keep in mind that performance is critical for API endpoints and functions with high execution volumes. Of course, make sure to stay on top of your systems with a serverless dashboard.

Written by taavi-rehemagi | CEO of Dashbird. 13y experience as a software developer & 5y of building Serverless applications.