By the end of this AWS Lambda optimization article, you will have a workflow of continuously monitoring and improving your Lambda functions and getting alerts on failures.

Serverless has been the MVP for the last couple of years, and I’m betting it’s going to play a bigger role next year in backend development. AWS Lambda is the most used and mature product in the Serverless space today. That’s why I’m going to share some tips and best practices for building production-ready Lambdas with optimal performance and cost.

1. Tracking Performance and Cost of Lambdas

Before making any changes, I recommend setting up performance tracking for your functions. Monitoring invocation counts, durations, memory usage, and cost of your Lambda functions allows you to pinpoint issues and make informed decisions fast. You can use Dashbird since it’s easy to set up. It relies on CloudWatch Logs, is much easier to use than raw CloudWatch, and gives you quick access to and visualization of your mission-critical AWS data.

Let’s dive into the different strategies to turbocharge your functions.

2. Optimal memory provisioning and Lambda performance tuning

The amount of virtual CPU cores allocated to your Lambda function is linked to the memory provisioned for that function. A function with 256MB of memory will have roughly twice the CPU of a 128MB function. If you configure the current maximum of 10GB of memory, you get 6 virtual CPU cores. Memory size also affects cold start time linearly.

Considering the cost increase of more memory, developers choose to optimize either for speed or for cost.

Here’s a good example from “Cost and Speed Optimization” by Alex Casalboni:

“In terms of cost, the 128MB configuration would be the cheapest (but very slow!). Interestingly, using the 1536MB configuration would be both faster and cheaper than using 1024MB.
This happens because the 1536MB configuration is 1.5 times more expensive, but we’ll pay for half the time, which means we’d roughly save 25% of the cost overall.”

Lambda performance tuning can be done manually or through external tools that run your functions multiple times with different memory configurations. This way, you can choose what you deem the best trade-off for your particular use case.

3. Re-use Lambda containers

AWS reuses Lambda containers for subsequent calls if the next call is within 5 minutes. This allows developers to cache resources and implement connection pooling. Below is a checklist that you can go through when thinking in that direction.

- Put everything that doesn’t change between requests outside of your function code. In the Node.js runtime, code outside of your handler function will only be executed once, on a cold start.
- Store and reference external configurations and dependencies locally after the first execution, and limit the re-initialization of variables/objects on every invocation. Instead, use static initialization/constructors, global/static variables, and singletons.
- Keep alive and reuse connections (HTTP, database, etc.) established during the first invocation. Here’s some material on how to do that: connection pooling for MongoDB, and connection pooling for PostgreSQL & MySQL.
- Avoid using recursive code in your Lambda function, wherein the function automatically calls itself until some arbitrary criterion is met. This could lead to an unintended volume of function invocations and escalated costs.
- Cache reusable resources with LRU methods. This is a way of improving performance for unbound data. If your data grows without bounds, it’s possible that at some point it won’t fit into memory. But you can still apply a caching strategy for the most important items and throw out the items that aren’t used as often.

In the following code example, the in-memory cache only stores up to 100 items; then, it starts dropping the oldest ones. This way, the memory doesn’t overflow when the data is unbound. The expensiveApiCall function is only called if the item can’t be found in the cache.
- Beware of the provisioned memory here, which is 10GB at most! Lambda’s memory limit was increased and is now 10GB; it also comes with 6 vCPUs, which should cover most requirements.
- Scale horizontally, not vertically. If your data doesn’t fit into 10GB and caching isn’t an option, you have to fan out multiple Lambda invocations.
- Avoid calling Lambda functions directly from Lambda functions. Don’t do it recursively with itself, nor with other functions. If you call a Lambda function directly from a Lambda function and wait for the result, you pay for two: the one that waits and the one that is called. If you need to link multiple Lambda functions, let SQS, SNS, Kinesis, or Step Functions handle their interactions. This way, your function can finish quickly, and a cheaper service will do the waiting for the next function for you.

4. Log-Based Lambda Monitoring and Error Handling

AWS Lambda running slow? With servers, collecting performance metrics and tracking failed executions is normally done by an agent that collects telemetry and error information and sends it over HTTP. With AWS Lambda, this approach can slow down functions and, over time, add quite a bit of cost. Not to mention the extra overhead that comes from adding (and maintaining) third-party agents across possibly large numbers of Lambda functions.

The great thing about Lambda functions is that all performance metrics and logs are sent to AWS CloudWatch. In itself, CloudWatch is not the perfect place to observe and set up error handling, but some services work on top of it and do a good job of providing visibility into your services.
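One agent-free way to add custom metrics on top of that is CloudWatch’s Embedded Metric Format (EMF): the function logs a specially structured JSON line, and CloudWatch turns it into a queryable metric with no extra HTTP calls from your code. Below is a minimal sketch; the namespace, metric, and dimension names are placeholders:

```javascript
// Emit a custom metric as a CloudWatch Embedded Metric Format (EMF)
// log line. Lambda ships stdout to CloudWatch Logs automatically,
// so no agent or extra network call is needed.
function emitMetric(namespace, name, value, unit = "Count", dimensions = {}) {
  const logLine = {
    _aws: {
      Timestamp: Date.now(),
      CloudWatchMetrics: [
        {
          Namespace: namespace,
          Dimensions: [Object.keys(dimensions)],
          Metrics: [{ Name: name, Unit: unit }],
        },
      ],
    },
    ...dimensions, // dimension values live at the top level of the line
    [name]: value, // as does the metric value itself
  };
  console.log(JSON.stringify(logLine));
  return logLine; // returned only to make the helper easy to inspect
}

// Example (placeholder names): record how long an upstream call took.
emitMetric("my-app", "UpstreamLatency", 123, "Milliseconds", {
  FunctionName: "checkout-handler",
});
```

Since stdout from a Lambda function already ends up in CloudWatch Logs, emitting a metric this way costs about as much as a console.log.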
Summary

There’s a lot of room to optimize your Serverless stack, and it all starts with knowing the right ways to do it and locating the issues. I recommend following all of the instructions above and testing the performance difference after making changes to your Lambda functions. Keep in mind that performance is critical in API endpoints and functions that have high execution volumes. Of course, make sure to stay on top of your systems with a serverless dashboard.