This is the first of a 3-part mini series on managing your AWS Lambda logs. In part 1 we will look at how you can get all of your logs off CloudWatch.
Part 2 will help you better understand the tradeoffs with different approaches to logging & monitoring, with some helpful tips and tricks that I have come across.
Part 3 will demonstrate how to capture and forward correlation IDs through various event sources, e.g. API Gateway, SNS and Kinesis.
During the execution of a Lambda function, whatever you write to stdout (e.g. using console.log in Node.js) is captured by Lambda and sent to CloudWatch Logs asynchronously in the background, without adding any overhead to your function's execution time.
You can find all the logs for your Lambda functions in CloudWatch Logs, organised into log groups (one log group per function) and then log streams (one log stream per container instance).
You could, of course, send these logs to CloudWatch Logs yourself via the PutLogEvents operation, or send them to your preferred log aggregation service such as Splunk or Elasticsearch. But remember that everything has to be done during a function’s invocation. If you make additional network calls during the invocation, you’ll pay for that additional execution time, and your users will have to wait that much longer for the API to respond.
So, don’t do that!
Instead, process the logs from CloudWatch Logs after the fact.
In the CloudWatch Logs console, you can select a log group (one for each Lambda function) and choose to stream the data directly to Amazon’s hosted Elasticsearch service.
This is very useful if you’re using the hosted Elasticsearch service already. But if you’re still evaluating your options, then give this post a read before you decide on the AWS-hosted Elasticsearch.
Some things you should know before using Amazon’s Elasticsearch Service on AWS (read.acloud.guru)
From the same console, you can also choose to stream the logs to a Lambda function instead. In fact, when you create a new function from the Lambda console, there are already a number of blueprints for pushing CloudWatch Logs to other log aggregation services.
Clearly this is something a lot of AWS’s customers have asked for.
You can find blueprints for shipping CloudWatch Logs to Sumologic, Splunk and Loggly out of the box.
So that’s great, now you can use these blueprints to help you write a Lambda function that’ll ship CloudWatch Logs to your preferred log aggregation service. But here are a few things to keep in mind.
Whenever you create a new Lambda function, it’ll create a new log group in CloudWatch Logs. You want to avoid a manual process for subscribing log groups to your ship-logs function.
Instead, enable CloudTrail, and then set up an event pattern in CloudWatch Events to invoke another Lambda function whenever a log group is created.
You can do this one-off setup in the CloudWatch console manually.
Match the CreateLogGroup API call in CloudWatch Logs and trigger a subscribe-log-group Lambda function to subscribe the newly created log group to the ship-logs function you created earlier.
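Here is a sketch of what that subscribe-log-group function could look like. It assumes CloudTrail is enabled and a CloudWatch Events rule invokes it on every CreateLogGroup call; SHIP_LOGS_FUNCTION_NAME and SHIP_LOGS_ARN are hypothetical environment variables you would configure yourself, and the AWS SDK client is injected via a factory so the logic is easy to test.

```javascript
const PREFIX = '/aws/lambda/';

// factory so the AWS SDK client can be injected (and faked in tests);
// in the Lambda runtime you'd pass in `new AWS.CloudWatchLogs()`
function makeSubscribeHandler(cloudWatchLogs, env = process.env) {
  return async (event) => {
    // CloudWatch Events delivers the CloudTrail record under event.detail
    const logGroupName = event.detail.requestParameters.logGroupName;

    // never subscribe the ship-logs function's own log group to itself,
    // or you'll create an infinite invocation loop
    if (logGroupName === PREFIX + env.SHIP_LOGS_FUNCTION_NAME) {
      return { subscribed: false, logGroupName };
    }

    await cloudWatchLogs.putSubscriptionFilter({
      logGroupName,
      filterName: 'ship-logs',
      filterPattern: '', // an empty pattern matches every log event
      destinationArn: env.SHIP_LOGS_ARN
    }).promise();

    return { subscribed: true, logGroupName };
  };
}

// exports.handler = makeSubscribeHandler(new (require('aws-sdk').CloudWatchLogs)());
```

Note that CloudWatch Logs also needs permission to invoke the destination function (granted via lambda add-permission), which is omitted here for brevity.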
If you’re working with multiple AWS accounts, then you should avoid making the setup a manual process. With the Serverless framework, you can set up the event source for this subscribe-log-group function in the serverless.yml file.
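As a sketch, the event source in serverless.yml could look something like the following (the function and handler names are placeholders):

```yaml
functions:
  subscribe-log-group:
    handler: subscribe.handler
    events:
      - cloudwatchEvent:
          event:
            source:
              - aws.logs
            detail-type:
              - AWS API Call via CloudTrail
            detail:
              eventSource:
                - logs.amazonaws.com
              eventName:
                - CreateLogGroup
```

This mirrors the event pattern you would otherwise configure by hand in the CloudWatch console, so each new account only needs a deployment rather than manual clicks.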
Another thing to keep in mind is that you need to avoid subscribing the log group for the ship-logs function to itself: that would create an infinite invocation loop, which is a painful lesson you want to avoid.
Serverless: A lesson learned. The hard way. (sourcebox.be)
By default, when Lambda creates a new log group for your function, the retention policy is to keep the logs forever. Understandably this is overkill, and the cost of storing all these logs adds up over time.
Fortunately, using the same technique above we can add another Lambda function to automatically update the retention policy to something more reasonable.
Here’s a Lambda function for auto-updating the log retention policy to 30 days.
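A minimal sketch of such a function is shown below, assuming it’s triggered by the same CreateLogGroup event pattern as before, with the AWS SDK client injected via a factory for testability.

```javascript
// factory so the AWS SDK client can be injected (and faked in tests);
// in the Lambda runtime you'd pass in `new AWS.CloudWatchLogs()`
function makeRetentionHandler(cloudWatchLogs, retentionInDays = 30) {
  return async (event) => {
    // CloudWatch Events delivers the CloudTrail record under event.detail
    const logGroupName = event.detail.requestParameters.logGroupName;

    // cap how long CloudWatch Logs keeps the data for this group
    await cloudWatchLogs.putRetentionPolicy({ logGroupName, retentionInDays }).promise();

    return { logGroupName, retentionInDays };
  };
}

// exports.handler = makeRetentionHandler(new (require('aws-sdk').CloudWatchLogs)());
```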
If you already have lots of existing log groups, then consider wrapping the demo code (below) for auto-subscribing log groups and auto-updating log retention policy into a one-off script to update them all.
You can do this by iterating through all log groups with the DescribeLogGroups API call (following the pagination token), and then invoking the corresponding functions for each log group.
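The pagination step can be sketched as follows; this helper collects every log group in the account, and the injected client stands in for an AWS SDK CloudWatchLogs instance.

```javascript
// collect every log group in the account, following the nextToken
// pagination of the DescribeLogGroups API call
async function listAllLogGroups(cloudWatchLogs) {
  const logGroups = [];
  let nextToken;
  do {
    const resp = await cloudWatchLogs.describeLogGroups({ nextToken }).promise();
    logGroups.push(...resp.logGroups);
    nextToken = resp.nextToken;
  } while (nextToken);
  return logGroups;
}
```

From there, a one-off script would simply loop over the result and apply the subscription and retention updates to each log group.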
You can find example code in this repo.
theburningmonk/lambda-logging-demo: demo for shipping logs to an ELK stack and auto-subscribing new log groups (github.com)
Hi, my name is Yan Cui. I’m an AWS Serverless Hero and the author of Production-Ready Serverless. I have run production workloads at scale in AWS for nearly 10 years, and I have been an architect or principal engineer in a variety of industries, ranging from banking, e-commerce and sports streaming to mobile gaming. I currently work as an independent consultant focused on AWS and serverless.
You can contact me via Email, Twitter and LinkedIn.
Check out my new course, Complete Guide to AWS Step Functions.
In this course, we’ll cover everything you need to know to use the AWS Step Functions service effectively, including basic concepts, HTTP and event triggers, activities, design patterns and best practices.
Get your copy here.
Come learn about operational best practices for AWS Lambda: CI/CD, testing and debugging functions locally, logging, monitoring, distributed tracing, canary deployments, config management, authentication and authorization, VPC, security, error handling, and more.
You can also get 40% off the face price with the code ytcui.
Get your copy here.