
Monitoring Lambda Metrics with the ELK Stack – Part 1


Learn how to monitor Lambda functions using a combination of the ELK Stack, Lambda, and CloudWatch to keep an eye on your cloud environment and save money.



This post is part 1 of a 2-part series about monitoring Lambda with Logz.io and the ELK Stack. Part 2 explores how to analyze and visualize the data, as well as how to configure alerts.

The ultimate goal behind serverless computing is to allow developers to focus primarily on their code. While going a long way towards this goal, AWS Lambda is not without its pitfalls, and still requires developers to pay close attention to how the code executes and performs. As a pay-as-you-go service, Lambda costs are directly affected by performance. The fewer issues you have, the less you will end up paying.

Monitoring our Lambda functions can help in this regard, and in this series we will examine how to implement a monitoring system built on an integration between Lambda, CloudWatch and the Logz.io ELK Stack. Lambda exposes some useful metrics in CloudWatch that can be used for monitoring, and we will use a dedicated Lambda function to ship these metrics for analysis and visualization in Logz.io.

CloudWatch Lambda Metrics

AWS Lambda reports eight different metrics to CloudWatch which we can use for monitoring:

  • Invocations – Counts the number of times a function is invoked by an event or an API call. Assuming duration and memory usage stay the same, more invocations mean a higher bill. You can also spot problems if the number of invocations exceeds what you expect, or if there are no invocations at all.
  • Errors – Counts the number of failed invocations. Every time a function fails with a 4XX error response, it counts as an error. You can use this metric to track third-party exceptions in your Lambdas, or to alert you when a Lambda exceeds its memory or run-time limits.
  • Dead Letter Errors – A dead letter queue lets Lambda write an event's payload to SQS when execution fails. If Lambda then fails to write the payload to the dead letter queue, this metric is incremented, and that is something you definitely want to know about.
  • Duration – Measures a function's run time in milliseconds. Helps you make sure a function does not approach its timeout setting. Furthermore, as previously mentioned, the longer a Lambda runs, the more AWS will charge for it.
  • Throttles – The number of Lambda invocations that were throttled. You can prevent this from happening by closely monitoring your invocation metrics.
  • IteratorAge – Emitted for stream-based invocations only. Measures the difference between the time the last record in a batch was written to the stream and the time Lambda received the batch. Using this metric, you can find out whether your stream-processing Lambda is keeping up with the stream or falling behind.
  • ConcurrentExecutions – Measures the sum of concurrent executions at a given point in time. There are many good reasons to limit concurrent executions, for example reining in functions that scale automatically based on demand and could otherwise run up costs.
  • UnreservedConcurrentExecutions – Represents the concurrency available to functions that do not have a custom concurrency limit specified. You can find it at the bottom of the Lambda console page, under the Concurrency section; the value is the total account concurrent execution limit minus the total reserved concurrency.
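To get a feel for what this data looks like before we automate anything, here is a minimal sketch that pulls one of these metrics, Invocations, from CloudWatch using boto3. The function name is a hypothetical placeholder; the shipper we build below retrieves metrics in a similar way, driven by a configuration file.

import boto3
from datetime import datetime, timedelta

# Hypothetical example: fetch the per-minute Invocations count for a single
# function over the last five minutes. "my-function" is a placeholder name.
cloudwatch = boto3.client('cloudwatch')

response = cloudwatch.get_metric_statistics(
    Namespace='AWS/Lambda',
    MetricName='Invocations',
    Dimensions=[{'Name': 'FunctionName', 'Value': 'my-function'}],
    StartTime=datetime.utcnow() - timedelta(minutes=5),
    EndTime=datetime.utcnow(),
    Period=60,
    Statistics=['Sum']
)

for point in response['Datapoints']:
    print("{} {}".format(point['Timestamp'], point['Sum']))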

Shipping the Lambda Metrics to Logz.io

Now that we’ve got a basic understanding of what data can be used for monitoring our Lambda functions, let’s begin the process of shipping it to Logz.io. To do this we will create a Lambda shipper.

As a side note, it’s worth mentioning that the code of the function we are about to use can easily be modified to collect and send data from other AWS services that publish metrics to CloudWatch. We will cover these use cases in future articles.

Creating a New Function

Our first step is to create a new Lambda function.

Open the AWS Lambda console, and click Create function.


Select Author from scratch, and enter the following details:

  • Name – Enter a name for your Lambda function.
  • Runtime – From the drop-down menu, select Python 2.7.
  • Role – Make sure to add the following policy to your Lambda role:
{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Stmt1338559372809",
        "Action": ["cloudwatch:GetMetricStatistics", "cloudwatch:ListMetrics", "cloudwatch:DescribeAlarms"],
        "Effect": "Allow",
        "Resource": "*"
    }]
}
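You can attach this policy through the IAM console, but if you prefer to script it, the following is a rough sketch using boto3. The role and policy names are placeholders you would replace with your own.

import json
import boto3

# Hypothetical sketch: attach the CloudWatch read permissions above to an
# existing Lambda execution role as an inline policy. The role and policy
# names below are placeholders.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "Stmt1338559372809",
        "Action": [
            "cloudwatch:GetMetricStatistics",
            "cloudwatch:ListMetrics",
            "cloudwatch:DescribeAlarms"
        ],
        "Effect": "Allow",
        "Resource": "*"
    }]
}

iam = boto3.client('iam')
iam.put_role_policy(
    RoleName='my-lambda-shipper-role',
    PolicyName='cloudwatch-read-metrics',
    PolicyDocument=json.dumps(policy_document)
)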


Hit the Create Function button in the bottom right corner of the page to continue.

Adding the Lambda Code

In the Function Code section, select Edit code inline from the Code entry menu, and paste the code from our Git repository into the editor.

In the Handler field, verify that the function name matches the .py file name and that the handler name matches the Lambda handler inside the .py file.


In the Environment variables section, enter the following details:

  • FILEPATH – Path name, absolute or relative to the current working directory, of the configuration file (see below).
  • TOKEN – Your Logz.io token. It can be found in your Logz.io app account settings.
  • URL – If you are in the EU region, insert https://listener-eu.logz.io:8071. Otherwise, use https://listener.logz.io:8071. You can tell which region you are in by checking the login URL: if it says app.logz.io, you are in the US; if it says app-eu.logz.io, you are in the EU.
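The actual shipper code lives in the Logz.io repository, but for orientation, here is a minimal, hypothetical skeleton showing how a handler might read these environment variables and the configuration file described below. It is not the real implementation.

import json
import os

def lambda_handler(event, context):
    # Hypothetical skeleton, not the actual Logz.io shipper: it only shows
    # how the three environment variables defined above might be consumed.
    token = os.environ['TOKEN']           # Logz.io shipping token
    url = os.environ['URL']               # Logz.io listener URL
    config_path = os.environ['FILEPATH']  # path to the JSON configuration file

    with open(config_path) as config_file:
        config = json.load(config_file)

    # ... fetch the metrics listed in `config` from CloudWatch and POST
    # them to `url` using `token` ...
    return 'done'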


Next, configure the execution settings for the Lambda.

Just like with any Lambda function, these settings will depend on your specific usage of the function. We recommend setting the memory to 512 MB and defining a 3-minute timeout. You can adjust these settings later based on the Lambda logs in CloudWatch.
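These settings can also be changed programmatically after the function exists. A rough sketch using boto3, with a placeholder function name:

import boto3

# Hypothetical sketch: apply the recommended starting point of 512 MB of
# memory and a 3-minute (180-second) timeout. The function name is a
# placeholder.
lambda_client = boto3.client('lambda')
lambda_client.update_function_configuration(
    FunctionName='my-metrics-shipper',
    MemorySize=512,
    Timeout=180
)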


Adding the Metrics Configuration File

The next step involves adding a JSON configuration file that specifies the metrics you want Lambda to pull and send.

Before we describe the parameters in the configuration file, we first need to get familiar with some CloudWatch concepts.

Namespaces

Namespaces are containers for CloudWatch metrics, and part of their job is to isolate metrics from one another so they do not mistakenly get aggregated into the same group of statistics. The namespace we will use for this demonstration is "AWS/Lambda". The AWS documentation has the full list of namespaces.

Dimensions

Dimensions are name/value pairs that uniquely identify a metric. Dimensions allow you to refine the metric statistics you are retrieving from CloudWatch, and in our Lambda implementation, we will use them to aggregate the data produced by an AWS service.

To make the JSON configuration file for our Lambda easier to understand and configure, we decided to follow the structure of the list_metrics function in the boto3 documentation, with a few additional parameters of our own.

{
    "TimeInterval": int,
    "Period": int,
    "Statistics": ["Average", "Minimum", "Maximum", "SampleCount", "Sum"],
    "ExtendedStatistics": ["string", ],
    "Configurations": [{
        "Namespace": "string",
        "MetricName": "string",
        "Dimensions": [{
            "Name": "string",
            "Value": "string"
        }]
    }]
}


Parameters:

  • TimeInterval [REQUIRED] – The time period to monitor, in minutes, before the Lambda was invoked. Set to the same value as the scheduled event time interval.
  • Period [REQUIRED] – The granularity, in seconds, of the returned data points. For metrics with regular resolution, a period can be as short as one minute (60 seconds) and must be a multiple of 60.
  • Statistics – The metric statistics.
  • ExtendedStatistics – The percentile statistics. Specify values between p0.0 and p100. You can have either Statistics or ExtendedStatistics in your configuration file, but not both.
  • Configurations [REQUIRED] – A list of JSON objects, each consisting of a "Namespace" key and optional "MetricName" and "Dimensions" keys.

Right-click on your Lambda function folder, and select New File.

Give it a name, and set that name as the FILEPATH value in the Environment variables section.
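For illustration, a configuration that retrieves the Lambda metrics described above every five minutes, at a granularity of 60 seconds, could look something like the following. The exact metric selection is up to you, and the function name in the Dimensions block is a placeholder.

{
    "TimeInterval": 5,
    "Period": 60,
    "Statistics": ["Sum", "Average", "Maximum"],
    "Configurations": [{
        "Namespace": "AWS/Lambda",
        "MetricName": "Invocations"
    }, {
        "Namespace": "AWS/Lambda",
        "MetricName": "Errors"
    }, {
        "Namespace": "AWS/Lambda",
        "MetricName": "Duration",
        "Dimensions": [{
            "Name": "FunctionName",
            "Value": "my-function"
        }]
    }]
}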

Defining the Lambda Trigger

In the configuration file, we defined that each time our Lambda executes, it asks for metrics from the last five minutes, at a granularity of 60 seconds. We will therefore configure the trigger accordingly, to execute the Lambda every five minutes.

In the Add Triggers section at the top of the page, select the CloudWatch Events trigger.


Configure the trigger to fire every five minutes, using a schedule expression of rate(5 minutes), and give the rule a name.


Don’t forget to save your configurations.
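If you prefer to create the trigger programmatically rather than through the console, a sketch along the following lines would set up an equivalent five-minute schedule. The rule name, function name, and ARN are placeholders.

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Hypothetical sketch: a CloudWatch Events rule that fires every five minutes
# and targets the shipper function. All names and the ARN are placeholders.
rule = events.put_rule(
    Name='logzio-metrics-shipper-every-5-minutes',
    ScheduleExpression='rate(5 minutes)',
    State='ENABLED'
)

# Allow CloudWatch Events to invoke the function.
lambda_client.add_permission(
    FunctionName='my-metrics-shipper',
    StatementId='allow-cloudwatch-events',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn']
)

events.put_targets(
    Rule='logzio-metrics-shipper-every-5-minutes',
    Targets=[{
        'Id': 'shipper',
        'Arn': 'arn:aws:lambda:us-east-1:123456789012:function:my-metrics-shipper'
    }]
)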

Congratulations! You have successfully created a function for shipping Lambda metrics to Logz.io. After a few minutes, you should begin to see these metrics showing up in your Discover tab in Kibana.

What’s next? How do you leverage the ELK Stack for analyzing and visualizing the data for monitoring Lambda? That’s exactly what we will elaborate upon in the next part of this series. We will provide examples of how to query the data, create visualizations, and more.

Stay tuned!



