
Best Practices for Logging in AWS Lambda


We present some best practices for logging in AWS Lambda. After all, logging is just as important in a serverless environment.


Today, we'll cover some things that you might find quite useful in your everyday work. We'll go through some of the best practices for logging in AWS Lambda, and explain how and why these practices simplify your Lambda logging. Let's start with the basics: how does logging work in AWS Lambda?

Logging in AWS Lambda

AWS Lambda automatically monitors your functions and reports metrics through Amazon CloudWatch. To help you troubleshoot function failures, Lambda logs every request handled by your function and automatically stores the logs generated by your code in Amazon CloudWatch Logs.

You can put logging statements in your code to verify that it is working correctly and as expected. Lambda integrates with CloudWatch Logs automatically and pushes all logs from your code to the CloudWatch Logs group associated with the function. It's worth mentioning that Lambda logging itself incurs no extra charges, but don't forget that the standard CloudWatch charges apply.
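As a minimal sketch, assuming the Python runtime, here is a hypothetical handler that uses the standard logging module; everything it writes ends up in the function's CloudWatch Logs group:

```python
import logging

# The Lambda runtime adds a handler that forwards records to CloudWatch Logs,
# so setting the root logger's level is all that's needed here.
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    # These statements appear in the log stream for this invocation,
    # prefixed with the request ID by the Lambda runtime.
    logger.info("Received event: %s", event)
    try:
        result = do_work(event)  # hypothetical business logic
        logger.info("Processing succeeded")
        return result
    except Exception:
        logger.exception("Processing failed")
        raise

def do_work(event):
    # Placeholder for the function's real work.
    return {"status": "ok"}
```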

Centralize Logging in AWS Lambda

You should know that there are various log aggregation services, such as Splunk, Logz.io, Sumo Logic, Papertrail, LogZilla, Logmatic.io, and numerous others. Why do we need them? They exist to help us move Lambda logs from CloudWatch Logs to another, more capable log service. Keep in mind that CloudWatch Logs is an asynchronous event source for Lambda, so when you process CloudWatch Logs with a Lambda function, you need to be mindful of the number of concurrent executions it creates. Say you have 100 functions running concurrently. Every one of them pushes logs to CloudWatch Logs, which can in turn trigger 100 concurrent executions of the log shipping function.

Furthermore, this can potentially double the number of functions running concurrently in your region. Remember that there is a soft regional limit of 1,000 concurrent executions across all functions. You can also set Reserved Concurrency on the log shipping function to cap its maximum number of concurrent executions, but then you risk losing logs when the log shipping function is throttled.
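As a sketch of that cap, assuming a hypothetical log shipping function named ship-logs, reserved concurrency can be set with boto3:

```python
import boto3

lambda_client = boto3.client("lambda")

# Cap the log shipping function at 10 concurrent executions so a burst of
# log events cannot eat into the regional concurrency limit. Invocations
# beyond this cap are throttled, which is where logs can be lost.
lambda_client.put_function_concurrency(
    FunctionName="ship-logs",  # hypothetical function name
    ReservedConcurrentExecutions=10,
)
```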

Streaming Logs

Take a look at the CloudWatch console: select a log group (there is one for every Lambda function) and, under the "Actions" menu, you can choose to stream the data directly to Amazon's hosted Elasticsearch service. If you're already an Elasticsearch user, this will come in quite handy, but if you're still weighing your options, there are alternatives.

A more flexible approach is to stream your logs from CloudWatch Logs to a Kinesis stream first, because from the Kinesis stream, a Lambda function can process the logs and forward them to a log aggregation service of your choosing. This way, you have complete control over the concurrency of the log shipping function: as log volume grows, you can increase the number of shards in the Kinesis stream, which in turn increases the number of concurrent executions of the log shipping function.
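Here is a minimal sketch of such a log shipping handler, with a hypothetical forward_to_aggregator helper standing in for whichever service you pick. CloudWatch Logs delivers subscription data to Kinesis as gzip-compressed JSON, and Kinesis base64-encodes each record:

```python
import base64
import gzip
import json

def handler(event, context):
    for record in event["Records"]:
        # Kinesis base64-encodes the record data; CloudWatch Logs gzips it.
        payload = gzip.decompress(base64.b64decode(record["kinesis"]["data"]))
        message = json.loads(payload)

        # CloudWatch Logs sends an initial control message to test the
        # subscription; there are no log events to ship in that case.
        if message.get("messageType") != "DATA_MESSAGE":
            continue

        for log_event in message["logEvents"]:
            forward_to_aggregator(
                log_group=message["logGroup"],
                log_stream=message["logStream"],
                timestamp=log_event["timestamp"],
                message=log_event["message"],
            )

def forward_to_aggregator(log_group, log_stream, timestamp, message):
    # Placeholder: print instead of calling a real aggregation service's API.
    print(f"{log_group}/{log_stream} {timestamp}: {message}")
```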

When creating a new function from the Lambda console, you can choose from a number of blueprints that push CloudWatch Logs to other log aggregation services. A blueprint gives you a Lambda function that ships your CloudWatch Logs to your preferred aggregation service, but there are some things you should be careful with.

New Log Groups Auto-Subscription

Every time you create a new Lambda function, a new log group is automatically created in CloudWatch Logs. What you want to avoid is a manual process of subscribing each log group to your log shipping function. Enabling CloudTrail and setting up an event pattern in CloudWatch Events allows you to invoke a Lambda function every time a log group is created. This can be set up manually from the CloudWatch console, but if you're working with several AWS accounts, it's advisable to avoid manual configuration.
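A sketch of that auto-subscribing function follows, assuming the CloudWatch Events rule matches the CreateLogGroup API call recorded by CloudTrail, and assuming hypothetical ARNs for a Kinesis destination and the role CloudWatch Logs assumes to write to it:

```python
import os
import boto3

logs = boto3.client("logs")

# Hypothetical ARNs, supplied via environment variables in a real setup.
DESTINATION_ARN = os.environ["DESTINATION_ARN"]  # e.g. the Kinesis stream
ROLE_ARN = os.environ["ROLE_ARN"]

def handler(event, context):
    # The CloudTrail-sourced event carries the new log group's name.
    log_group = event["detail"]["requestParameters"]["logGroupName"]

    # Never subscribe the log shipping function's own log group, or every
    # shipped log line would trigger another invocation (see below).
    if log_group == "/aws/lambda/ship-logs":  # hypothetical name
        return

    logs.put_subscription_filter(
        logGroupName=log_group,
        filterName="ship-logs",
        filterPattern="",  # an empty pattern matches every log event
        destinationArn=DESTINATION_ARN,
        roleArn=ROLE_ARN,
    )
```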

The Serverless Framework allows you to set up the event source for this function in the serverless.yml file. Remember that you must avoid subscribing the log group of the ship-logs function to itself, since that would create an infinite invocation loop. When Lambda creates a new log group for your function, the default retention policy is to keep the logs forever. That is usually unnecessary, and the storage for all these logs can cost quite a bit as they add up over time. Luckily, you can add another Lambda function that automatically updates the retention policy to something more adequate.
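A minimal sketch of that retention-tightening function, triggered by the same CreateLogGroup event pattern; the 30-day figure is an arbitrary assumption:

```python
import boto3

logs = boto3.client("logs")

def handler(event, context):
    log_group = event["detail"]["requestParameters"]["logGroupName"]

    # Replace the default "never expire" policy with 30-day retention.
    # Note retentionInDays only accepts certain values (1, 3, 5, 7, 14, 30, ...).
    logs.put_retention_policy(
        logGroupName=log_group,
        retentionInDays=30,
    )
```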

Where Does That Leave Us?

Depending on your needs, you should either ship your logs directly to one of the log aggregation services or stream them through Kinesis first. Think carefully about which log aggregation service to choose, since each has its perks and its faults. Avoid the infinite invocation loop, and with it the unnecessary added cost. Read other users' comments about their experiences, ups, and downs, so you'll know what to expect from each aggregation service and, more generally, how to avoid situations you don't want to end up in.

