AWS Lambda in Production: State of Serverless Report 2017
Let's take a look at the state of AWS Lambda and serverless computing to see the most popular languages, how big serverless functions are, and other interesting tidbits.
AWS Lambda has become one of the most talked about cloud services of 2017. So we took a look at how New Relic customers are using our out-of-the-box instrumentation for AWS Lambda, and discovered that most are starting small to experiment and get familiar with the technology.
Right now, it seems, lightweight functions written using scripting languages that execute in a few hundred milliseconds are far more common than larger, long-running functions; the median duration of monitored functions is just 510 milliseconds. As more people gain familiarity with serverless architectures by running experiments and prototyping, however, we expect to see an increase in the number of functions per account and greater overall function complexity. We’re also curious if the trend toward polyglot AWS Lambda environments will accelerate, as more than 75% of accounts were using multiple programming languages.
Those are just a couple of the conclusions we’ve drawn from our efforts to better understand configuration and usage patterns of organizations leveraging New Relic and AWS Lambda. As we celebrate the anniversary of the launch of New Relic Infrastructure and its instrumentation for AWS Lambda, we analyzed anonymized data on AWS Lambda usage with New Relic Insights. While the term “serverless” includes many services from multiple cloud providers, for this report we’ve explored only data from AWS Lambda.
This analysis is an extension of our recent survey—Achieving Serverless Success with Dynamic Cloud and DevOps—that revealed 43% of respondents were already using functions-as-a-service (FaaS) platforms like AWS Lambda, Azure Functions, or Google Cloud Functions in production.
What else has our research uncovered about how companies use AWS Lambda? We found that scripting languages like Node.js and Python are the most commonly used runtimes, that functions tend to have relatively small code sizes (except for Java functions), that function configuration settings do not seem aggressively tuned for cost and performance, and that accounts usually deploy functions to a single region.
Let’s dive into the numbers to figure out what it all means.
Scripting Languages Like Node.js and Python Are the Most Commonly Used Runtimes
AWS Lambda allows developers to write their functions in any of several programming languages. This flexibility lets teams use familiar languages and adapt existing frameworks and libraries to new functions. It was still surprising, though, how many organizations used more than one runtime: nearly 76% had created functions using more than one programming language.
Scripting languages were the most common runtime choice. For functions invoked over a one-week period in November, Python 2 was the most popular runtime—just under half of all functions used it. Node.js was the second most popular, with more than one-third of all functions written using Node 6.10 or 4.3. Far fewer developers are choosing Python 3 (just 2.73%) compared to Python 2.
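Part of what makes these scripting runtimes so popular is how little code a working function requires. For reference, a Lambda function in Python is just a module-level handler that receives an event payload and a context object; the sketch below is a generic illustration (the handler name and event shape are our own, not from the report):

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler: echo a greeting built from the event.

    `event` is the invocation's JSON payload; `context` carries runtime
    metadata (request ID, remaining time) and is unused here.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, %s!" % name}),
    }

# Invoke locally for a quick check (Lambda supplies a real context object):
print(handler({"name": "serverless"}, None))
```

A function this small, with no bundled dependencies, is exactly the kind of lightweight workload the duration and code-size numbers below suggest is typical.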
Functions Tend to Have Relatively Small Code Size … Except Java Functions
The total code size of AWS Lambda functions—how much disk space the code and the dependencies of a function consume—tends to be small by modern software standards. Nearly half of the monitored functions could almost fit on a 3½-inch floppy disk.
Java functions were notable outliers. Their average code size was more than 20MB, indicating significantly larger function deployment sizes than Node.js or Python functions.
The bias towards smaller function code size suggests the majority of the New Relic-monitored functions running on AWS Lambda contain relatively few bundled dependencies or extensive business logic, instead pointing to potentially simpler functions. This supports the general best practice around creating small functions designed to perform a single, well-defined task.
It’s notable, however, that around 4% of even non-Java functions were larger than 20MB. We can infer those workloads are likely packaging extensive code or dependencies with the function that can’t be explained by runtime choice alone. We’re curious if this will increase over time as AWS Lambda users explore more exotic use cases.
The Median Function Timeout Setting Is 60 Seconds
AWS Lambda functions, which are billed in 100ms increments, are usually designed to be fast. With that in mind, the default timeout value for a new function is just three seconds. Surprisingly, most of the functions we saw overrode this limit, boosting the timeout value to 60 seconds. Because this was such a common override, we’re hypothesizing that the timeout default is too low for many function types and during development, it’s simply increased to a large number: 1 minute.
We did not see a clear relationship between timeout configuration and the memory setting—more memory did not seem to correlate with shorter timeout values. However, we know that the memory setting, which also allocates proportional CPU power, can have a significant impact on overall function performance. As more CPU power and memory are allocated to a function, the time it takes to invoke the function also decreases—potentially decreasing the cost to run the function.
.NET, Java, and Python 2.7 functions were more likely to be configured with a larger memory setting (512MB) than the default value (128MB).
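Both the timeout and memory settings are plain configuration values that can be changed at any time after deployment. As a hedged sketch using boto3, the AWS SDK for Python (the function name and region here are placeholders, not from the report):

```python
import boto3  # AWS SDK for Python; requires AWS credentials to run

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Apply the common overrides observed in the data: a 60-second timeout
# (up from the 3-second default) and 512 MB of memory (up from 128 MB).
lambda_client.update_function_configuration(
    FunctionName="my-function",  # placeholder: your function's name
    Timeout=60,                  # seconds
    MemorySize=512,              # MB; also scales allocated CPU
)
```

Because memory also scales CPU, changing `MemorySize` is effectively the one performance knob Lambda exposes, which makes the lack of tuning in the data notable.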
The data suggests that memory isn’t extensively tuned for most functions. This is a potential opportunity for performance and cost optimization. Recently, when we explored tuning an AWS Lambda function’s memory, we found that using data to validate a memory increase actually ended up reducing the monthly operating cost of the function.
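To make the cost mechanics concrete, here is a small sketch of per-invocation pricing under the 2017-era rates ($0.20 per million requests plus $0.00001667 per GB-second, with duration rounded up to 100ms increments; these rates are an assumption and change over time):

```python
import math

PRICE_PER_GB_SECOND = 0.00001667  # 2017-era rate; check current pricing
PRICE_PER_REQUEST = 0.0000002     # $0.20 per 1 million requests

def billed_duration_ms(duration_ms):
    """Lambda bills in 100 ms increments, rounding up."""
    return int(math.ceil(duration_ms / 100.0) * 100)

def invocation_cost(duration_ms, memory_mb):
    """Cost of a single invocation: request fee plus GB-seconds consumed."""
    gb_seconds = (memory_mb / 1024.0) * (billed_duration_ms(duration_ms) / 1000.0)
    return PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# The 510 ms median function at the 128 MB default is billed for 600 ms:
print(billed_duration_ms(510))       # 600
print(invocation_cost(510, 128))
```

The rounding is why tuning matters: if a memory increase pushes a function from, say, 510ms down under 500ms, the billed duration drops a full 100ms increment, which can offset the higher per-GB-second charge.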
Functions Are Globally Distributed, but Not Usually Multi-Region
With AWS Lambda now available in almost every AWS region around the globe, we were not surprised to see functions being run in Europe, North America, and Asia. Still, regions in the United States accounted for the majority of monitored functions. Of course, because this data comes from New Relic’s customers using Lambda, the breakdown could be affected by the distribution of our joint global customer base.
We also explored how many accounts were deploying functions to multiple global regions. At the time of this report, most organizations were deployed in only a single region. However, a small number of accounts (less than 1%) deployed functions in more than nine AWS regions. We’re curious if the simpler deployment model of functions will make highly distributed functions in many global regions more common.
We'd Like to Hear How You're Using AWS Lambda and Serverless
While trial and experimentation appear to be the focus for the majority of AWS Lambda users monitoring it with New Relic, the data also signals that many different types of organizations are successfully creating functions that take advantage of serverless architecture and pricing models. Looking at the outliers in the data, we see intriguing possibilities for easily running functions around the world using multiple programming languages.
Insights from how New Relic customers are configuring and running serverless functions are helpful for the development and improvement of our products. Previously, we have looked at our own user data to uncover patterns and validation for Docker containers (2015, 2016, and 2017). As AWS Lambda and serverless technology grows and matures, we’re very interested in how those technologies are being used by our customers to build new systems.
We’re excited how New Relic’s platform will evolve to meet changing requirements in this new environment. As always, we want to continue to hear how you’re using serverless as part of your cloud adoption journey—please share with us in our survey.
This blog post originally appeared on New Relic.
Published at DZone with permission of Clay Smith, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.