What Is Your Limit, AWS Lambda?

In the time-honored development tradition of breaking things, we discuss some of the limitations of serverless platform AWS Lambda.



Let's start with some background. AWS Lambda was launched by Amazon in 2014. It gives the "illusion" of an application running without a server — or rather a function, as that is exactly why AWS Lambda was created in the first place. An organization using AWS Lambda does not need to manage infrastructure, can reduce costs (there is no charge while your code is not running), and gets scaling automatically. It seems like a win-win situation. However, as in real life, there are no silver bullets.

In this article, I'm going to focus on some problems you can encounter with AWS Lambda.

How AWS Lambda Works

The application runs within an Execution Context (EC), a runtime environment that sets up all the dependencies the code needs to run — such as database connections and HTTP endpoints. Bootstrapping an EC takes time, which may introduce significant latency in the form of the so-called cold start.
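Because the EC (and everything initialized with it) can be reused across invocations, expensive setup is typically done outside the handler. A minimal sketch (the dictionary standing in for a real database client is purely illustrative):

```python
import time

# Module-level code runs once per Execution Context, during the cold
# start, and is reused by every invocation that lands on the same EC.
INIT_STARTED_AT = time.time()
connection = {"opened_at": INIT_STARTED_AT}  # stand-in for a real DB client

def handler(event, context):
    # The handler body runs on every invocation; it reuses the
    # module-level 'connection' instead of reopening it each time.
    return {"reused_connection": connection["opened_at"] == INIT_STARTED_AT}
```

On a warm invocation the module-level block is skipped entirely, which is exactly where the cold-start cost difference comes from.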

Cold Start

An AWS Lambda function is warmed up and ready to receive requests while in use, plus some idle time after a request finishes processing. In other words, when it is not used for a longer period, it is shut down and needs to be bootstrapped again. That is when a cold start happens — every time the EC needs to be started. If we need an instant response from the service, the bootstrap latency might not be acceptable. Depending on the technology used — Java, for example — it can take a significant amount of time (seconds). You may think of solving the issue by never letting the Lambda go down, and you are right: it is possible to ping it periodically just to keep it alive. But then, isn't it just an ordinary server/virtual machine?
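The keep-alive trick usually means a scheduled event (e.g. a CloudWatch rule) that pings the function so its EC is never reclaimed. A sketch of the receiving side — the `warmup` key is my own convention here, not an AWS field:

```python
def handler(event, context):
    # A scheduled "ping" event keeps the Execution Context alive so
    # real requests avoid cold-start latency. Short-circuit so the
    # ping does no real work and costs almost nothing.
    if event.get("warmup"):
        return {"status": "warm"}
    # Normal request path.
    return {"status": "processed", "payload": event.get("body")}
```

Of course, as noted above, at this point you are effectively paying to simulate an always-on server.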

Callbacks and /tmp

The developer may have the impression that each time a Lambda is invoked, a new Execution Context is created. That is not always true: the EC stays up even when the Lambda is not used for some time (I could not find the exact duration in the documentation), so a Lambda can reuse an existing EC. What is more, each Lambda has access to the /tmp directory, where data is shared between invocations. If for any reason you store mutable state in /tmp, it might be modified by another request. As for callbacks themselves: when a Lambda finishes processing a request while a callback is still incomplete, the callback will not be forced to stop — it will run to completion. This can lead to another issue.
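The /tmp reuse is easy to demonstrate: state written by one invocation can be visible to the next one that lands on the same EC. A sketch (the counter file is illustrative):

```python
import os

CACHE_FILE = "/tmp/lambda_counter.txt"  # /tmp survives in a reused EC

def handler(event, context):
    # Read whatever a previous invocation on this EC left behind.
    count = 0
    if os.path.exists(CACHE_FILE):
        with open(CACHE_FILE) as f:
            count = int(f.read())
    count += 1
    with open(CACHE_FILE, "w") as f:
        f.write(str(count))
    # 'count' may already be > 1 on a "fresh" request -- never assume
    # /tmp starts empty, and never rely on it persisting either.
    return {"invocations_seen_on_this_ec": count}
```

Treat /tmp as an opportunistic cache at most, never as a source of truth.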

Gateway Timeout

AWS Lambdas cannot be invoked directly over HTTP. To do so, a Lambda must be attached to an AWS API Gateway resource — and here we can hit a platform limit. One of the hard limits (it cannot be changed, even if you contact AWS) is the API Gateway response time of 29 seconds; after that, the client receives a timeout response. More interestingly, this does not mean the invoked Lambda terminates. No: once the Lambda has started processing the request, it keeps going until it finishes (or hits the separate Lambda execution timeout — more on that in the next point); only the Gateway informs the client with a timeout message. The client is misinformed: from their point of view a timeout happened, but they do not know the Lambda is still running.

Imagine this situation: a user clicks a button to submit a form. Processing is slow, and the user hits the Gateway timeout limit. Confused, the user tries to resend the form a few more times, maybe even succeeding in the end. Each of those failed (from the user's perspective) requests is still processed, and multiple duplicate resources may be created.
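One common mitigation for the duplicate-resource problem is an idempotency token the client attaches to the form. A minimal sketch — the in-memory dictionary stands in for a persistent store such as DynamoDB, and all names here are illustrative:

```python
# Stand-in for a persistent store keyed by a client-supplied token.
processed = {}

def handler(event, context):
    token = event["idempotency_token"]
    if token in processed:
        # A retried submission: return the original result instead of
        # creating a duplicate resource.
        return processed[token]
    # First time we see this token: actually create the resource.
    result = {"resource_id": "res-" + token, "created": True}
    processed[token] = result
    return result
```

With this in place, the user's frustrated resubmissions all map onto the same created resource.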

Lambda Timeout

Right now, the maximum execution time for a request is set to 5 minutes. As with the API Gateway limit, this threshold cannot be raised. You can easily hit the timeout when you invoke a Lambda from another Lambda, or when you use a Lambda for a long-running processing job, such as generating reports.
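For long-running batch work, one defensive pattern is to check the time budget via the context object's `get_remaining_time_in_millis()` and stop cleanly before the hard timeout kills the function mid-batch. A sketch, with a fake context object standing in for the real one:

```python
class FakeContext:
    # Minimal stand-in for the real Lambda context object, which
    # exposes get_remaining_time_in_millis().
    def __init__(self, remaining_ms):
        self._remaining_ms = remaining_ms

    def get_remaining_time_in_millis(self):
        return self._remaining_ms

def handler(event, context):
    processed = []
    for item in event["items"]:
        # Bail out with partial progress rather than being killed
        # mid-item by the hard execution timeout.
        if context.get_remaining_time_in_millis() < 10_000:
            return {"done": False, "processed": processed}
        processed.append(item)
    return {"done": True, "processed": processed}
```

The caller (or a Step Functions-style orchestrator) can then resume from the reported progress instead of losing the whole batch.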

Tight Coupling

The final point: tight coupling with AWS. Lambda is called "serverless," yet the code still runs on a server — just one that no longer depends on us. We have no control over the process, how it is managed, or updates to the container. By choosing AWS Lambda, we give all of that up. And what if, for political or strategic reasons, your company decides to move to a different cloud provider, or even to on-premise servers? You need to rewrite all the code.

Of course, there are more issues — proper logging, request tracing, etc. — but those are more or less common to all distributed systems (except that for logging, the only option is the CloudWatch service).


To say it clearly: I just wanted to point out that AWS Lambda might not be a perfect replacement for an application on EC2, a virtual machine, a container, etc. I still think it is useful when we want to react to an event — say, sending an SMS notification when a user logs into Cognito — or for prototyping to reduce costs. Simply use the technology for what it was designed for.

I hope you liked the article, and I would like to ask for your opinion: do you use AWS Lambda heavily in your application? If so, what for?


Published at DZone with permission of
