An Introduction to Serverless Computing: Part 1
Part one of this series begins with a soft introduction to serverless computing, defining what it is and some of its advantages and disadvantages.
Serverless computing is one of the most hyped technology trends of recent times. Going by the name, some may think that no servers are involved. There are servers running our code, but they are invisible in the infrastructure and need no management, handling, or provisioning by the development or operations teams.
In serverless computing, or FaaS (Function as a Service), we generally write small applications or functions that each focus on one task. We then upload the function to a cloud provider, where it is invoked by events such as HTTP requests, webhooks, and so on. More recently, people have also started grouping serverless with BaaS (Backend as a Service). BaaS and FaaS are related in their operational attributes (e.g., no resource management) and are frequently used together.
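As a concrete sketch of what such a function looks like, here is a minimal handler in the AWS Lambda Python style (the handler signature and event shape vary by provider; the greeting logic is purely illustrative):

```python
import json

def handler(event, context):
    # The platform invokes handler(event, context) for each triggering
    # event (HTTP request, queue message, etc.) -- there is no server
    # process or web framework for us to manage.
    # For an HTTP trigger, 'event' typically carries the request body
    # as a JSON string under the "body" key.
    name = json.loads(event.get("body", "{}")).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The function holds no state between invocations; everything it needs arrives in the event payload, which is what makes this style a natural fit for the stateless workloads described below.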
The most common use case for serverless computing is running stateless workloads such as data processing or real-time stream processing, though it can also augment a stateful application. The Internet of Things and chatbots are very good fits for serverless computing.
Most often, "serverless computing" refers to the services offered by cloud providers. Many people call serverless "PaaS on steroids." If you're a happy PaaS user, serverless is an option to consider, but it won't always fit. Similarly, if you're building microservices, serverless is something to keep an open mind about.
All major cloud providers, such as AWS, Google Cloud Platform, and Microsoft Azure, have serverless offerings. You can also build your own serverless solutions on container orchestrators like Docker Swarm or Kubernetes. We will discuss these options in the second part of this series.
Major Benefits of Serverless Computing
No Server Management
When we use the serverless offerings of cloud providers, developers do not have to do any server management or capacity planning; the cloud provider takes care of it all. This removes a whole class of problems for developers, allowing them to prototype and develop faster, and it lowers operational costs for organizations.
Pay Per Use
In serverless computing, you pay only for the CPU time consumed while your functions execute; there is no charge when the code is not running. The service provider handles the infrastructure and its operations, including maintenance, security, and scaling. You save the time and money otherwise spent renting or buying infrastructure, setting it up, planning capacity, and maintaining it.
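To illustrate the pay-per-execution model, here is a back-of-the-envelope cost calculation. The per-GB-second and per-million-requests rates below are illustrative assumptions, not any vendor's actual prices:

```python
# Rough FaaS cost model: you pay for invocations plus the
# memory-seconds your code actually consumes while running.
# Both rates are made-up placeholders for illustration only.
PRICE_PER_GB_SECOND = 0.0000166
PRICE_PER_MILLION_REQUESTS = 0.20

def monthly_cost(invocations, avg_duration_ms, memory_mb):
    # Memory-seconds consumed, normalized to GB-seconds.
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute = gb_seconds * PRICE_PER_GB_SECOND
    requests = (invocations / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    return compute + requests

# 3 million invocations a month, 120 ms each, 256 MB of memory:
print(round(monthly_cost(3_000_000, 120, 256), 2))  # 2.09 at these assumed rates
```

The point of the sketch is the shape of the bill: when invocations drop to zero, the cost drops to zero, with no idle servers to pay for.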
Automatic Scaling
We don't need to set up or tune autoscaling. Applications are scaled up and down automatically based on demand, and scaling can happen within seconds. Whenever the load on a serverless function increases, the underlying infrastructure immediately creates multiple copies of the function and distributes requests among them.
Automated High Availability and Fault Tolerance
Functions run on servers that are automatically deployed across multiple availability zones of the cloud provider, which makes them highly available. High availability and fault tolerance come from the underlying platform; developers don't need to program for them specifically.
Fast and Easy Deployments
Once everything has been set up, you can focus on the task at hand without getting tangled in the traditional workflow for running, testing, and deploying code. Functions can be tested and deployed independently, and code changes can be pushed through CI/CD integrations. Beyond faster deployment, you can easily roll out updates and have them take effect faster than before. Serverless architecture also suits Agile product development: FaaS platforms let developers focus on the code and make their product feature-rich through Agile build, test, and release cycles.
Serverless and microservices may seem somewhat similar. Serverless computing can be integrated into microservice-oriented solutions, so you can break complex applications into small, easily manageable modules, keeping development and testing Agile. You can write a function for each service, and each runs independently; resources are allocated only when a specific event occurs, and functions stay idle until a task is assigned.
Limitations of Serverless Computing
Vendor Lock-In
One major drawback of serverless computing is dependency on the service provider. Functions are written to each vendor's specifications, so the same application or function may not behave the same way if you change providers, and changing providers can incur additional expense. Because the vendor controls the whole ecosystem, organizations don't have complete control over it.
Multitenancy and Security
Although not a big concern for most of us, you cannot be sure what other applications or functions run beside yours, which raises multitenancy and security concerns. If the server is crashed by someone else's function, yours could inadvertently be affected as well. This is an issue when several serverless applications share the same physical server. Cloud vendors are investing in new technology to mitigate it.
Latency and Concurrency
Functions are invoked by defined triggers and run only for the duration of an execution. If an application is not in use, the service provider can take it down, which affects performance: it starts again when triggered, but this "cold start" adds delay. Latency and concurrency are the trade-offs to weigh in a serverless architecture. Latency is the time taken to start processing a task, while concurrency is the number of independent tasks that can run at a time. Latency requirements can vary widely, so it is important to define them properly to get the best out of a serverless platform.
Resource Limits
All FaaS providers place limits on the resources available to a function, so it is safer not to run high-performance, resource-intensive workloads on serverless solutions. The cloud provider enforces memory and execution-time limits, and running too many tasks at once can exceed them, which can block other tasks from completing properly and within the desired timeframe.
Monitoring and Debugging
Logging is a challenge in a serverless architecture. To troubleshoot an issue in your app, you may need to look in several places, and you may need to integrate third-party APIs for more granular reporting. Monitoring serverless applications is also a big issue: although all the cloud vendors provide their own monitoring services, these are not as granular as what we can achieve for applications hosted on traditional servers.
Because your code is "just code," with no custom libraries you must use or implement for the most part, unit testing is quite simple. Integration testing, on the other hand, is a huge challenge: testing all the functions together is very time consuming and takes great effort, and you also depend on the cloud vendor for this sort of testing.
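To show why unit testing stays simple, here is a sketch in pytest style. The handler below is a hypothetical function under test (not from the article); since it is plain Python with no cloud SDK, it can be exercised entirely locally:

```python
import json

# Hypothetical function under test: plain Python, no provider APIs,
# so no cloud account or emulator is needed to unit test it.
def handler(event, context):
    name = json.loads(event.get("body", "{}")).get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"Hello, {name}!"})}

# Unit tests simply call the handler with hand-built fake events.
def test_greets_named_user():
    resp = handler({"body": json.dumps({"name": "DZone"})}, None)
    assert resp["statusCode"] == 200
    assert json.loads(resp["body"])["message"] == "Hello, DZone!"

def test_defaults_when_body_missing():
    resp = handler({}, None)
    assert json.loads(resp["body"])["message"] == "Hello, world!"
```

What these local tests cannot cover is the wiring between the function and its real triggers (API gateways, queues, schedulers), which is exactly the integration-testing gap described above.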
This is the first part of the series. In the second part, available soon, we will talk about the vendors and tools available for serverless/FaaS.
Published at DZone with permission of Gurpreet Singh. See the original article here.