
Top AWS Lambda Gotchas You Must Know Before Configuring Them


We look at some common gotchas when it comes to configuring AWS Lambda functions so you can avoid them in the future.


“Once You Set It, You Forget It.”

We often hear cloud practitioners quote this line about serverless technologies. While serverless services have brought a great deal of convenience, they have also brought a few challenges with them, such as no visibility into the abstracted layers that the service provider takes care of. One such serverless service is AWS Lambda, where AWS allocates CPU power to each function on your behalf depending on the memory allocation. "Some abstractions do not actually simplify our lives as much as they were meant to do," as Joel Spolsky says in his article, "The Law of Leaky Abstractions."

"What's with Lambda? Misconfigured functions, most of the time?"

So, in this post, we walk you through how CPU allocation affects Lambda execution time.

AWS Lambda Function Memory and CPU Dynamics: Why You Should Know Them

When you run a piece of code on a traditional server, you typically have full access to the machine and can monitor CPU and memory utilization continuously.

In the AWS cloud world, if you use EC2 instances, you have visibility into CPU, memory, IOPS, network, and so on.

With serverless services like AWS Lambda, you just run your code without provisioning or managing servers. So, with Lambda functions, we naturally tend not to worry about system-level metrics such as CPU allocation, disk utilization, I/O, and the network. We tend to focus more on application metrics, right?
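That said, a function can still surface a couple of these numbers itself. Below is a minimal sketch of a Python handler (the handler name and log format are just illustrative) that logs the memory limit Lambda reports through its context object alongside the process's peak resident memory, which roughly corresponds to the "actual consumption" figure discussed later.

import resource


def handler(event, context):
    # Memory configured for this function (the value you set when deploying)
    configured_mb = int(context.memory_limit_in_mb)

    # Peak resident set size of this process; ru_maxrss is reported in KB on
    # Linux (which Lambda runs on), so convert to MB for an easy comparison
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    peak_mb = peak_kb / 1024

    print(f"configured={configured_mb} MB, peak_used={peak_mb:.0f} MB")
    return {"configured_mb": configured_mb, "peak_used_mb": round(peak_mb)}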

The Catch: CPU Allocation Per Memory Consumption

Recently, as an experiment, we ran a Lambda function with a 250 MB memory allocation. After execution, the function had actually consumed just 170 MB.

This actual memory consumption figure misleads many AWS users. When their code's execution time creeps up, they tend to focus entirely on code optimization, garbage collection, and so on, and ignore other dynamics, such as the underlying CPU allocation that comes with each memory setting.

When we increased the function's memory allocation to 512 MB, the execution time dropped to 52 ms, roughly half of the original 100 ms.

Memory Allocated    Actual Memory Consumption    Execution Time
250 MB              170 MB                       100 ms
512 MB              170 MB                       52 ms


The result:

REPORT Duration: 48.44 ms Billed Duration: 50 ms Memory Size: 170 MB
REPORT Duration: 52.91 ms Billed Duration: 50 ms Memory Size: 170 MB
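If you want to capture these REPORT lines programmatically rather than hunting through CloudWatch, one option is to invoke the function with LogType="Tail" and parse the log tail that comes back in the response. Here is a rough sketch using boto3; the function name my-lambda-function is a placeholder, and the regex only covers the fields shown above.

import base64
import json
import re

import boto3

lambda_client = boto3.client("lambda")


def invoke_and_report(function_name):
    # LogType="Tail" returns the last few KB of the invocation's log, base64-encoded
    response = lambda_client.invoke(
        FunctionName=function_name,
        InvocationType="RequestResponse",
        LogType="Tail",
        Payload=json.dumps({}),
    )
    log_tail = base64.b64decode(response["LogResult"]).decode("utf-8")

    # Pull the Duration / Billed Duration / Memory Size fields out of the REPORT line
    match = re.search(
        r"Duration: ([\d.]+) ms\s+Billed Duration: (\d+) ms\s+Memory Size: (\d+) MB",
        log_tail,
    )
    return match.groups() if match else None


print(invoke_and_report("my-lambda-function"))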


Serverless-as-a-Success: Debunking the AWS Lambda Performance Challenge

Even when your Lambda function's code consumes memory well within its limit, a higher memory allocation can improve performance, in our case by about 50%.

According to AWS, you choose the amount of memory your function needs, and Lambda allocates proportional CPU power and other resources based on that memory setting. You can update the configuration and request additional memory in 64 MB increments, from 128 MB to 3008 MB. Lambda allocates CPU power proportional to memory, using the same ratio as a general-purpose Amazon EC2 instance type.
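Bumping the memory setting is a one-line configuration change. The sketch below uses boto3's update_function_configuration with the same placeholder function name as before; the equivalent change can be made in the console or with the AWS CLI.

import boto3

lambda_client = boto3.client("lambda")

# Raise the memory setting; the CPU share scales with it.
# MemorySize must fall within the 128-3008 MB range described above.
lambda_client.update_function_configuration(
    FunctionName="my-lambda-function",  # placeholder name
    MemorySize=512,
)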

As Jeremy Daly mentions in one of his blog posts, "CPU resources, I/O, and memory are all affected by the memory allocation setting. If your function is allocated more than 1.8GB of memory, then it will utilize a multi-core CPU. Thus, if you have CPU intensive workloads, increasing your memory to more than 1.8GBs should give you significant gains. The same is true for I/O bound workloads like parallel calculations."

There have been several debates around CPU allocation in AWS Lambda since 2015. Mustafa Akin's post sheds plenty of light on how CPU allocation happens for Lambda functions. He recommends profiling the application and identifying its bottlenecks before adjusting the function's configuration for the ideal memory setting; one simple way to do that is to sweep through a few memory settings and compare, as sketched below.
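For example, a crude sweep, assuming the invoke_and_report helper from the earlier sketch and a test payload that exercises a representative code path, might look like this:

import time

import boto3

lambda_client = boto3.client("lambda")


def sweep_memory_settings(function_name, sizes=(128, 256, 512, 1024)):
    results = {}
    for size in sizes:
        lambda_client.update_function_configuration(
            FunctionName=function_name,
            MemorySize=size,
        )
        # Give the configuration update a moment to propagate
        time.sleep(5)

        # Reuses the invoke_and_report sketch from above to read the REPORT line
        results[size] = invoke_and_report(function_name)
    return results


for size, report in sweep_memory_settings("my-lambda-function").items():
    print(size, "MB ->", report)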

Conclusion

In conclusion, CPU performance scales with the provisioned memory: you get access to more compute power each time you choose the next larger memory setting. So do experiment with different memory options before you configure your Lambda functions the next time.

If you are looking to monitor the costs of all your AWS Lambda functions in seconds, try the TotalCloud Interactive Cost Analyzer.


