Serverless Computing: Ready for Prime Time
Is serverless computing finally ready for the big time? Will it be Cloud 2.0?
The cloud ushered in a fundamental change in the way applications are deployed. Instead of worrying about how much dedicated hardware to order for your data center, you simply spin up as many servers as you need to do the job, then decommission them once the job is done. Yet you still have to plan how many servers you will need at each stage of processing, remember to decommission them when done, and pay for the entire virtual machine for as long as it is allocated, even though the server may be idle for most of that time.
The next stage in the evolution of the cloud is serverless computing. After all, the servers you order in the cloud are only a means to an end — they provide the plumbing needed to run your code in response to various events. In the serverless computing paradigm, you supply the set of events and the code to run when each event occurs. The cloud takes care of all the rest: identifying when an event has occurred and, in response to that event, deploying the relevant code to a server, running the code, and then decommissioning the server. The cloud also provides elasticity, i.e., it will automatically run as many instances of your code as needed to handle the workload.
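The "events plus code" model above can be made concrete with a minimal sketch, assuming the AWS Lambda Python handler convention (the platform calls a `handler(event, context)` function whenever a configured event fires); the event payload here follows the S3 notification format:

```python
# A minimal sketch of the serverless model: you supply only this function,
# and the cloud platform invokes it whenever a configured event occurs.
# No server is provisioned, started, or decommissioned by you.
import json

def handler(event, context):
    # 'event' carries the trigger payload, e.g. an S3 object notification.
    # Here we simply report which objects changed.
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records if "s3" in r]
    return {"statusCode": 200, "body": json.dumps({"changed": keys})}
```

The platform runs as many concurrent copies of this function as the event rate demands, which is where the elasticity described above comes from.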
Amazon was the first cloud provider to roll out serverless computing, launching its Lambda service in 2014. AWS Lambda can automatically run code in response to a variety of events, such as database table updates, modifications to Amazon Simple Storage Service (S3) objects, and notifications from services like Amazon Simple Notification Service (SNS). Since then, Google Cloud Platform has rolled out Google Cloud Functions, IBM has released an open-source serverless platform called OpenWhisk, and Microsoft Azure provides its own take on serverless computing called Azure Functions. Today, support for serverless computing is ubiquitous across the major cloud platforms, which indicates how popular the paradigm has become.
The major attraction of serverless computing is cost. Suppose you have developed a cloud-based web portal that occasionally receives requests. For each request, the portal does a bit of processing and returns a response. Before serverless computing, this portal would require a dedicated server in the cloud, ready and waiting to receive requests 24/7, even though it actually processes requests for only a few minutes each day. If you instead design the portal as a serverless application, where the event is a new request and the corresponding code processes it, then you pay only for the few moments per day when requests actually arrive rather than for a dedicated server. It's no wonder, then, that many developers report a tenfold drop in their AWS bill after re-architecting their applications as serverless.
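A back-of-the-envelope calculation shows why the savings can be dramatic. The numbers below are hypothetical, not any provider's actual rates, and assume pay-per-use billing at the same hourly rate as a dedicated instance:

```python
# Hypothetical cost comparison: an always-on server versus paying only
# for the ~10 minutes per day the portal actually does work.
HOURLY_RATE = 0.02            # $/hour (illustrative instance price)
HOURS_PER_MONTH = 24 * 30

dedicated_cost = HOURLY_RATE * HOURS_PER_MONTH      # billed around the clock
busy_hours = (10 / 60) * 30                         # 10 min/day over a month
serverless_cost = HOURLY_RATE * busy_hours          # billed only while running

print(f"dedicated:  ${dedicated_cost:.2f}/month")
print(f"serverless: ${serverless_cost:.2f}/month")
```

With these illustrative numbers the dedicated server costs $14.40 per month against roughly $0.10 serverless; real savings depend on the provider's per-invocation pricing, but the shape of the argument is the same.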
Before you jump on the bandwagon, though, be aware of the limitations of serverless programming:
- High latency: If you use a dedicated cloud server, your code is already up and running when an event arrives, so the event can be processed within milliseconds. With serverless computing, it can take several hundred milliseconds from the time an event occurs until it is processed: you must wait while the cloud platform allocates a server to your code, deploys the code, and starts the runtime environment needed to run it (e.g., a Java Virtual Machine). This makes serverless computing a poor choice for applications that require single-digit-millisecond responses to events.
- Resource limits: Each cloud platform places limits on the server size available to run a serverless function, as well as on the total execution time of the code. For example, at the time of writing, AWS Lambda limits a serverless function to 1.5 GB of memory and no more than five minutes of execution time. This makes serverless programming a poor choice for applications that are memory-intensive or long-running.
- Development challenges: In a traditional procedural or object-oriented software architecture, a program consists of code that executes serially. A serverless program, on the other hand, consists of a set of code fragments whose execution order is determined entirely by the order in which events occur. This presents a challenge to the developer because many of these events (e.g., a change to an Amazon S3 object) can only be generated in the cloud — there are currently no good tools to emulate cloud events in a local development environment. This can reduce developer productivity, because coding, especially at the initial stages, is far easier in the local desktop environment than in the cloud.
- Testing challenges: It's not enough to individually test the code associated with each event. To implement a real-world use case that accomplishes useful work, you have to simulate the flow of events in the correct order as well as all other feasible orders. This requires a new set of test tools that are still evolving.
Serverless computing has matured since Amazon first introduced it in 2014, and today it is used in production by many enterprises. It is an excellent choice for applications where:
- The flow of the application can be expressed as responses to a series of events.
- Events occur sporadically. If your application is going to be constantly bombarded with events, it will be cheaper to rent an entire dedicated server rather than paying per event.
- Event processing is not resource intensive, e.g., it does not require a lot of time or memory.
- Higher latency is acceptable, i.e., waiting several hundred milliseconds (occasionally longer) before an event is processed.
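The "events occur sporadically" criterion can be quantified with a break-even estimate. The sketch below uses hypothetical per-request and per-GB-second prices (not any provider's actual rates) to find roughly where a flat-rate server becomes cheaper:

```python
# Hypothetical break-even point between per-invocation serverless pricing
# and a flat-rate dedicated server. All prices are illustrative.
PRICE_PER_REQUEST = 0.0000002      # $ per invocation
PRICE_PER_GB_SECOND = 0.0000166    # $ per GB-second of compute
SERVER_MONTHLY = 15.0              # flat monthly cost of a small server

def serverless_monthly(requests, mem_gb=0.128, seconds=0.2):
    # Cost = invocation charge + compute charge (memory x duration).
    compute = requests * mem_gb * seconds * PRICE_PER_GB_SECOND
    return requests * PRICE_PER_REQUEST + compute

# Double the request volume until serverless overtakes the flat rate.
n = 1
while serverless_monthly(n) < SERVER_MONTHLY:
    n *= 2
print(f"break-even near {n:,} requests/month")
```

With these assumed numbers, serverless stays cheaper into the tens of millions of requests per month, which is why only workloads that are "constantly bombarded" with events favor the dedicated server.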
The bottom line — for the right use case, serverless computing is an excellent choice that is ready for prime time and can provide significant cost savings.
Published at DZone with permission of Moshe Kranc, DZone MVB. See the original article here.