What Is Serverless Computing?
Want to learn more about serverless computing? Check out this post on the major components of serverless computing and the top serverless platforms.
Oftentimes, all these new buzzwords can be confusing. The bottom-line definition of serverless computing is: “serverless computing allows you to build and run applications and services without thinking about servers.”
Serverless computing is the abstraction of servers, infrastructure, and operating systems. When you build serverless apps, you don’t need to provision and manage any servers, so you can take your mind off infrastructure concerns.
The term “serverless” doesn’t mean servers are no longer involved. It simply means that developers no longer have to think much about them. Serverless allows developers to shift their focus from the server level to the task level.
Five Key Characteristics
For a technology to qualify as serverless, it needs to exhibit the following five characteristics.
1. No server management: There is no need to provision or maintain any servers. There is no software or runtime to install, maintain, or administer.
2. Flexible event-driven scaling: This is one of the most important characteristics of serverless; you shouldn’t have to worry about scaling your solution when demand rises (see the Facebook example below). Typically, your solution scales based on events, timers, or incoming actions. Simple use cases include executing code every second, executing code when an HTTP endpoint is called, or executing code when a new file is uploaded to blob storage.
3. Highly available: Serverless applications have built-in availability and fault tolerance. You don’t need to architect for these capabilities since the services running the application provide them by default.
4. No idle capacity: You don’t have to pay for idle capacity. If your code is not running, you shouldn’t pay for it.
5. Micro-billing: When your code is executed, you pay per execution. Typically, vendors calculate this based on memory consumption and execution time. For example, if your code requires 200 MB of RAM and takes three seconds to complete, you pay only for those resources.
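The micro-billing math in the last characteristic can be sketched in a few lines. This assumes a Lambda-style pricing model billed per GB-second plus a per-request fee; the default rates below are illustrative, not any vendor’s actual tariff:

```python
def execution_cost(memory_mb, duration_s,
                   price_per_gb_s=0.0000166667,
                   price_per_request=0.0000002):
    """Estimate the cost of a single serverless execution.

    Billing is memory allocated (in GB) multiplied by execution
    time (in seconds), plus a small per-request fee. The default
    rates are illustrative assumptions, not a real price sheet.
    """
    gb_seconds = (memory_mb / 1024) * duration_s
    return gb_seconds * price_per_gb_s + price_per_request

# The article's example: 200 MB of RAM for three seconds.
cost = execution_cost(200, 3)
print(f"Cost of one execution: ${cost:.10f}")
```

The key point is that the cost of a single execution is a tiny fraction of a cent, and you pay nothing at all when no executions occur.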
Key Technologies by Cloud Vendors
When it comes to serverless, there are a set of core technologies and supporting technologies. The core technologies fall under the pure serverless model and satisfy the five key characteristics highlighted above. However, the core technologies alone will not be able to support all scenarios. They typically depend on some supporting technologies, like storage, message queuing, database, API gateway, etc.
Core Technologies for Serverless
Core serverless technologies fall into three categories:
- A scalable platform to execute a piece of code
- A scalable workflow solution for stitching together discrete code executions
- A scalable pub/sub event routing engine
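As a rough mental model, the third category (a pub/sub event routing engine such as Azure Event Grid) can be sketched as a topic-based dispatcher. The class and event names here are illustrative, not any vendor’s API:

```python
from collections import defaultdict

class EventRouter:
    """A toy topic-based pub/sub router, illustrating the role an
    engine like Azure Event Grid plays: producers publish events to
    topics, and every subscribed handler is invoked per event."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self._subscribers[topic]:
            handler(event)

# Example: route a hypothetical "blob-created" event to a handler.
router = EventRouter()
processed = []
router.subscribe("blob-created", lambda e: processed.append(e["name"]))
router.publish("blob-created", {"name": "photo.jpg"})
```

In a real serverless platform, the handlers would be functions scaled and billed per invocation rather than in-process callbacks.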
Here are core serverless technologies from the major vendors, grouped by category:
- Code execution: AWS Lambda, Azure Functions, Google Cloud Functions
- Workflow: Azure Logic Apps, AWS Step Functions
- Event routing: Azure Event Grid
Supporting Technologies for Serverless
The supporting technologies for serverless fall under the PaaS (platform as a service) category, and they do not fully satisfy all five key characteristics. The main factor is that they carry base platform charges. For example, if you are using Azure API Management, there is a base cost (on the order of $100/month) to provide that service.
Examples of supporting technologies from each vendor include managed storage, message queuing, database, and API gateway services, such as Azure API Management, Azure Cosmos DB, and Amazon DynamoDB.
We highlighted only the top three cloud vendors here, since they are leading the way with both core and supporting technologies. IBM Bluemix (OpenWhisk) is currently lagging behind in this space. In addition, various other vendors, like Iron.io, are competing in this area.
In today’s world, there are various use cases for serverless technologies. Let’s take a simple example: imagine you are the CTO of Facebook. One of Facebook’s key capabilities is allowing users to upload photos and videos and share them with their friends. Facebook itself is a massive platform; running it and scaling it for 2 billion+ users requires a huge infrastructure (including many servers). However, this particular functionality can be implemented seamlessly using the serverless model, ignoring the complexity of the rest of the Facebook application.
The flow will look like this:
You can implement the functionality as an AWS Lambda, Azure Function, or Google Cloud Function, which can automatically expose an HTTP endpoint. From the front-end web or mobile application, you simply upload the content via that endpoint, and the data gets stored in the relevant backend. This is a simplified example. Ideally, you’ll add an API gateway and security to the HTTP endpoints, which can also be achieved seamlessly using either Azure API Management or Amazon API Gateway. Security can be provided by Azure AD B2C or a third party, like Auth0.
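A minimal sketch of that upload function, written as a generic Python handler: the event shape, field names, and the in-memory “blob store” are illustrative assumptions; a real function would use the vendor’s SDK and trigger bindings instead:

```python
import base64

# Stand-in for a blob/object store; a real function would call the
# vendor's storage SDK (e.g. S3 or Azure Blob Storage) instead.
blob_store = {}

def upload_handler(event):
    """Handle an HTTP-triggered upload: decode the base64 payload and
    persist it under the supplied key, returning an HTTP-style result."""
    key = event["filename"]
    data = base64.b64decode(event["body_base64"])
    blob_store[key] = data
    return {"status": 201, "stored": key, "bytes": len(data)}

# Simulate one HTTP request hitting the function's endpoint.
result = upload_handler({
    "filename": "cat.jpg",
    "body_base64": base64.b64encode(b"\xff\xd8fake-jpeg").decode(),
})
```

Because the platform runs one handler invocation per request, scaling to millions of concurrent uploads is the vendor’s problem, not yours.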
The important point to note here is that this particular functionality needs to scale for over 2 billion users uploading millions of photos and videos every minute. If serverless technologies did not exist, this capability would take months to implement with huge upfront costs. With serverless, it is a few weeks’ effort with near-zero upfront cost.
What Is the Difference Between PaaS and Serverless?
This is one of the common questions we come across when discussing serverless. In simple terms, you can see serverless as the evolution of PaaS. As briefly mentioned before, for a technology to be classified as pure serverless, it needs to satisfy the five key characteristics. PaaS fails mainly on the “no idle capacity” and “micro-billing” characteristics. Azure Cosmos DB and Amazon DynamoDB are good examples: you need to provision those services and accept some base costs. You can use capabilities like auto-scaling to grow the platform as and when the need arises, but that is a manual or automated task left to the consumer; the platform will not look after itself.
What Are the Challenges?
Manageability is one of the key challenges of going down the serverless route. If you are building a single monolithic application, it is easy to manage and maintain: you will have mature DevOps practices to run the application and mature CI/CD practices to take code from development to production. But if you have hundreds of small, discrete pieces of serverless functionality spread all over the place, managing and operating that solution becomes extremely complex, as explained in the article Challenges of Managing a Distributed Cloud Application.
In the past 12-18 months, the cloud vendors have invested significantly in maturing their platforms. However, when it comes to developer tools and management tools, maturity is still lagging.
You can use tools like Serverless360 for intelligent management and monitoring to address this gap. Vendor lock-in is another concern: you need to be very careful about which vendor you choose, because even though the top vendors offer similar functionality, it’s a one-way street. Once you implement and go live with your solution, it is extremely hard to port it from one vendor to another.
Author Credits: Saravana Kumar, Microsoft Azure MVP
Published at DZone with permission of Mohan N. See the original article here.
Opinions expressed by DZone contributors are their own.