Serverless Microservices (and Minimal Ops): Current Limitations of AWS Lambda
"At the end of the day, no one really cares about machines, containers; we just care about logic and results." –TJ Holowaychuk
Serverless architecture is a movement toward pure development and a focus on core competencies. Today it is more critical than ever to focus on developer empowerment, especially given the current landscape of platform limitations. The journey for enterprises going cloud-native takes on many facets of challenge, and along the way it involves conversations around ops and microservices.
What do we mean by “serverless”?
The advent of functions/endpoints-as-a-service (e.g. AWS Lambda, hook.io, Tonic) will help foster a new computing paradigm, one in which lower-level constructs like servers, containers, operating systems and more can largely be abstracted away.
A “serverless” approach to API deployment involves containerization, webhooks, virtualization and reciprocity, with no infrastructure (servers, deployments or installed software) required. Microservices can be used as an abstracted resource that allows developers to work more effectively, and focus more on development and less on operations.
At the heart of a microservice is the running code itself. Microservices also contain their own runtime, so they don't need to run on an ESB. Using a serverless approach, you can distribute workloads to optimize runtime. Microservices can be used to create this architecture, and scale independently to meet outward-facing API interface agreements. Microservices architecture encourages building small, focused subsystems that can be integrated into the whole system, preferably over REST.
With any change in product development, whether that be the move to Agile/Scrum planning practices or new methods for DevOps, a business can expect to incur a significant upfront investment in changing its patterns and system architecture. Time invested or scope of effort can be used to calculate the cost to the business. In the long run, using serverless microservices can mean significant cost savings as businesses eliminate or reduce ops and the use of servers.
In the future, using services like AWS Lambda, Google Cloud Functions, Tonic and others will change the paradigm and operational requirements for building microservices. Combined with an API Gateway, businesses can quickly build microservices that are very powerful and scalable, but will look different than traditional microservices.
Using said services allows startup and enterprise teams alike to focus on pure development and their core competencies. Services like AWS Lambda offer aggressive pay-for-use pricing, sub-second metering, continuous scaling, built-in fault tolerance, flexible resource models and much more. Open source frameworks like Apex make interactions with such services a breeze, and provide developer tooling to maximize productivity.
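To make the unit of deployment concrete: a Lambda function is just a handler that the platform invokes on demand. Below is a minimal sketch in Python (one of Lambda's supported runtimes); the event shape and the greeting logic are the author's hypothetical example, not anything from a specific service.

```python
# Minimal sketch of a Lambda-style function: the platform calls
# handler(event, context); there is no server process to manage.
# The {"name": ...} event shape is a hypothetical example.
import json

def handler(event, context):
    # Business logic only -- no ports, processes, or OS concerns.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the handler is a plain function, it can be exercised locally (e.g. `handler({"name": "Ada"}, None)`) before it is ever deployed, which is part of what makes tooling like Apex pleasant to work with.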
Diving into AWS Lambda
Here's a snapshot of the cost of some code you'd like to run in a 'serverless' paradigm:
A function with 256MB of RAM
Each invocation runs under 100ms
One hundred million invocations (now that is a heavily used API!)
The total cost would be $4.17 (minus the free tier credit AWS gives you).
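The arithmetic behind an estimate like this can be sketched as follows. The rates used here ($0.00001667 per GB-second of compute and $0.20 per million requests) are assumptions based on Lambda's published pricing at the time; the net bill also depends on the monthly free-tier credit being subtracted, so treat the figures as illustrative rather than authoritative.

```python
# Back-of-the-envelope Lambda cost model. The rates are assumptions
# based on Lambda's published pricing at the time of writing; the
# free-tier credit would then be subtracted from these gross figures.
PRICE_PER_GB_SECOND = 0.00001667
PRICE_PER_MILLION_REQUESTS = 0.20

def lambda_compute_cost(invocations, billed_seconds, memory_gb):
    """Gross duration (compute) charge, before any free-tier credit."""
    gb_seconds = invocations * billed_seconds * memory_gb
    return gb_seconds * PRICE_PER_GB_SECOND

def lambda_request_cost(invocations):
    """Gross per-request charge, before any free-tier credit."""
    return invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS

# The scenario above: 256 MB (0.25 GB), billed at 100 ms per call.
compute = lambda_compute_cost(100_000_000, 0.1, 0.25)
requests = lambda_request_cost(100_000_000)
```

Note that this scenario consumes 2,500,000 GB-seconds of compute; the key insight is that the bill scales with actual execution time and memory, not with provisioned servers.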
Pain Points with AWS Lambda
The current limitations imposed by Lambda are sometimes a hurdle and require workarounds. Some of those workarounds (for instance, separate AWS accounts per microservice/environment) are fairly tedious, even extreme.
Payload and Response Limits
There is, currently, a hard limit on just how large your payloads and responses can be. If you need to exceed these values, fairly dramatic workarounds are needed (e.g. EC2/VPC).
Invoke request body payload size: 6 MB
Invoke response body payload size: 6 MB
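A defensive sketch of respecting the 6 MB invoke limit before calling Lambda follows; the guard function is the author's own illustration. A common workaround for oversized payloads is to stage the data elsewhere (e.g. S3) and pass a reference instead, though that staging step is out of scope here.

```python
# Guard against Lambda's 6 MB invoke payload limit before calling
# the service. Payloads that don't fit should be staged elsewhere
# (e.g. S3) and passed by reference -- not shown here.
import json

INVOKE_PAYLOAD_LIMIT = 6 * 1024 * 1024  # 6 MB, per the limits above

def fits_invoke_limit(payload: dict) -> bool:
    """True if the JSON-encoded payload is under the invoke limit."""
    return len(json.dumps(payload).encode("utf-8")) < INVOKE_PAYLOAD_LIMIT
```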
Invocation Duration Limits
Long running functions are generally not a great fit for Lambda, both due to the pricing model and also the hard upper limit on function duration.
Maximum execution duration per request: 300 seconds
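One way to live within that ceiling is to process work in bounded slices and return a continuation when the time budget runs out, so a follow-up invocation can resume. A hypothetical sketch (the budget value and `process_item` are illustrative stand-ins):

```python
# Sketch of bounding a long-running job to fit Lambda's execution
# limit: process items until a time budget is spent, then return the
# leftover work so a follow-up invocation can resume where this one
# stopped. The default budget leaves headroom below the 300 s cap.
import time

def run_with_budget(items, process_item, budget_seconds=250.0):
    """Process items until the budget expires; return (done, remaining)."""
    deadline = time.monotonic() + budget_seconds
    done = []
    for i, item in enumerate(items):
        if time.monotonic() >= deadline:
            return done, items[i:]  # continuation: unprocessed work
        done.append(process_item(item))
    return done, []  # everything finished within budget
```

The caller (or a scheduler like Step Functions, if available) can re-invoke with the remaining slice until the list is empty, which is one way of embracing the small, focused-function model the limit encourages.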
While the deployment limits should work for most scenarios, they do mean that in certain circumstances microservices must truly embrace the Unix philosophy: be targeted and focused on one problem alone. Because of the limits Lambda imposes, and gaps in its tooling, it is wise to have separate AWS accounts per microservice and environment.
| Limit | Value |
| --- | --- |
| Lambda function deployment package size (.zip/.jar file) | 50 MB |
| Size of code/dependencies that you can zip into a deployment package (uncompressed zip/jar file) | 250 MB |
| Total size of all the deployment packages that can be uploaded per region | 75 GB |
Part of the hype is "microservices-washing" (think 'whitewashing,' or hiding some inconvenient truth): claiming you're fully built on microservices or a serverless architecture, when the reality vastly differs from company to company. The decision to implement microservices should be an architectural decision from day one.
Essentially, we want to build the right product and build it the right way, which means answering questions like: How do we test integrations between services? How do we perform higher-level integration tests for customer-facing apps?
The sooner a bug is discovered, the better. What starts as a $1 cost to fix can quickly grow to $10,000 if it is not fixed until the customer finds it. Therein lies the true lesson of serverless architecture: there are no shortcuts to good architecture.
Opinions expressed by DZone contributors are their own.