Serverless vs. Containers
It's serverless computing against containerization in this battle of components, traits, and use cases.
Many modern applications are built with either serverless or container technology. However, it is often difficult to choose the one best suited to a particular requirement.
In this article, we will try to understand how these two are different from each other and in what scenario we can use one or the other.
Let us first start with understanding the basics of serverless and container technology.
What Is Serverless Computing?
Serverless is a development approach that replaces long-running virtual machines with computing power that comes into existence on demand and disappears immediately after use.
Despite the name, there certainly are servers involved in running your application. It’s just that your cloud service provider, whether it’s AWS, Azure, or Google Cloud Platform, manages these servers, and they’re not always running.
It attempts to resolve issues such as:
- Unnecessary charges for keeping the server up even when we are not consuming any resources
- Overall responsibility for maintenance and uptime of the server.
- Responsibility for applying the appropriate security updates to the server.
- As usage scales, we need to scale the server up as well and, conversely, scale it down when usage drops.
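In this model, the unit of deployment is a small function handler rather than a server. As a rough sketch, an AWS Lambda handler in Python has the following shape (the `name` event field here is hypothetical; real event shapes depend on the trigger, such as API Gateway, S3, or SQS):

```python
import json

def handler(event, context):
    """Entry point the platform invokes on each request.

    The provider spins up compute, calls this function, and may
    freeze or discard the instance afterwards -- no servers to manage.
    """
    # 'name' is a hypothetical field for illustration; real event
    # shapes depend on the trigger (API Gateway, S3, SQS, ...).
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Locally, such a handler can be exercised by calling it directly, e.g. `handler({"name": "DZone"}, None)`.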
What Are Containers?
A container is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, and settings.
Containers solve the problem of running software reliably when it is moved from one computing environment to another by isolating the software from its environment. For instance, containers allow you to move software from development to staging and from staging to production and have it run reliably despite the differences between those environments.
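One reason the same container image behaves identically across environments is that environment-specific details are injected from outside (typically environment variables) rather than baked into the image. A minimal sketch of that pattern in Python, with illustrative variable names:

```python
import os

def load_config(env=os.environ):
    """Read deployment-specific settings from the environment.

    The same container image runs in dev, staging, and production;
    only these externally supplied values change between environments.
    DATABASE_URL and LOG_LEVEL are illustrative names, not a standard.
    """
    return {
        "database_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }
```

In a container orchestrator, these values would be supplied per environment (for example via the task definition or a secrets store), so promoting an image from staging to production changes no code.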
Serverless vs. Containers
To start with, it's worth saying that both serverless and containers are elements of an architecture designed for future change and for leveraging the latest innovations in cloud computing. Although people often compare Docker containers and serverless computing, the two have very little in common: they are different technologies that serve different purposes. First, let's go over some common points:
- Less overhead
- High performance
- Less interaction at the infrastructure level for provisioning
Although serverless is the newer technology, both approaches have disadvantages as well as benefits that make them useful and relevant. So let's review the two.
Longevity

Lambda: Lambda functions are "short-lived": once it finishes executing, a function spins down, and Lambda enforces a timeout of 15 minutes, so long-running workloads cannot run on it. Step Functions can be used to break long-running application logic into smaller steps (functions), but this does not suit every kind of long-running application.

Containers: ECS provides "long-running" containers; they can run as long as you want.
Throughput

Lambda: If an application has high throughput, Lambda will cost more than a container solution. It needs more resources, such as memory, and execution times are longer; since Lambda charges based on memory and execution time, the cost multiplies. In addition, a single function can have at most 3 GB of memory, which may not be enough to handle high throughput, forcing concurrent executions that can introduce latency due to cold-start time. For lower throughput, Lambda is a good choice in terms of cost, performance, and time to deploy.

Containers: ECS uses EC2 instances to host applications. EC2 can handle high throughput more effectively than serverless functions, because different instance types can be chosen to match the throughput requirement, and its cost will be comparatively lower. Latency is also better if a single EC2 instance can handle the load. For lower throughput, EC2 also works well, although the other factors described here should be weighed against Lambda.
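The throughput/cost trade-off above can be made concrete with Lambda's charging model: a per-request fee plus compute billed in GB-seconds (memory × duration). The prices below are illustrative examples only, not current list prices:

```python
def lambda_monthly_cost(invocations, duration_s, memory_gb,
                        price_per_gb_s=0.0000166667,
                        price_per_million_requests=0.20):
    """Estimate monthly Lambda cost: compute (GB-seconds) + requests.

    Prices are illustrative placeholders; check the provider's
    current pricing pages before relying on the numbers.
    """
    gb_seconds = invocations * memory_gb * duration_s
    compute_cost = gb_seconds * price_per_gb_s
    request_cost = invocations / 1_000_000 * price_per_million_requests
    return compute_cost + request_cost

# 1M invocations/month, 200 ms each, 512 MB of memory:
low = lambda_monthly_cost(1_000_000, 0.2, 0.5)

# 100M invocations/month at the same size -- cost scales linearly
# with traffic, which is where an always-on EC2 instance at a flat
# hourly rate can become the cheaper option:
high = lambda_monthly_cost(100_000_000, 0.2, 0.5)
```

Because both terms are linear in invocation count, Lambda cost grows in lockstep with traffic, whereas an always-on instance's cost is flat; the crossover point depends on the workload.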
Scalability

Lambda: Lambda has auto-scaling as a built-in feature.

Containers: Containers don't have hard constraints on scaling; however, unlike Lambda, scaling is not automatic and must be configured.
Time to Deploy

Lambda: Lambda functions are smaller in size and take significantly less time to deploy than containers: milliseconds compared to seconds.

Containers: Containers take significant time to configure and set up initially, since they require system settings and libraries. Once configured, though, they deploy in seconds.
Cost

Lambda: In a serverless architecture, infrastructure is not used unless the application is invoked, so you are charged only for the server capacity your application actually uses. This can be very cost-effective in some scenarios.

Containers: Containers are constantly running, so cloud providers charge for the server space even when no one is using the application. If throughput is high, containers are more cost-effective than Lambda. Also note that, compared to an EKS cluster, an ECS cluster is free.
Security

Lambda: For Lambda, system-level security is taken care of by AWS itself; you only need to handle application-level security using IAM roles and policies. However, if Lambda has to run in a VPC, VPC-level security must be applied as well.

Containers: With containers, we are also responsible for applying the appropriate security updates to the server, including patching the OS and upgrading software and libraries. ECS supports IAM Roles for Tasks, which makes it easy to grant containers access to AWS resources, for example allowing containers to access S3, DynamoDB, SQS, or SES at runtime. EKS doesn't provide IAM-level security at the pod level.
Vendor Lock-In

Lambda: Serverless functions bring vendor lock-in: moving from Lambda Functions to Azure Functions requires significant changes at the code and configuration level.

Containers: Containers are designed to run on any cloud platform that supports container technologies, so you get the benefit of building once and running anywhere. However, the services used for security (IAM, KMS, Security Groups, and others) are tightly coupled to AWS, so moving the workload to another platform would still need some rework.
Infrastructure Control

Lambda: If a team doesn't have infrastructure skills, Lambda is a good option: the team can concentrate on business-logic development and let AWS handle the infrastructure.

Containers: With containers, we get full control of the server, OS, and network components, which we can define and configure within the limits set by cloud providers. If an application or system needs fine-grained control of its infrastructure, this solution works better.
Maintenance

Lambda: Lambda doesn't need any maintenance work, as everything at the server level is taken care of by AWS.

Containers: Containers need maintenance such as patching and upgrading, which also requires skilled resources. Keep this in mind while choosing this architecture for deployment.
State Persistence

Lambda: Lambda functions are stateless and short-lived by design, so they do not maintain state between invocations. As a result, in-memory caching is not available, which can cause latency problems.

Containers: Containers can leverage the benefits of caching.
Latency and Startup Time

Lambda: For Lambda, cold-start and warm-start times are key factors to consider, as they can cause latency and add to the cost of executing functions.

Containers: Because containers are always running, they have no cold/warm-start time, and caching can reduce latency further. Compared to EKS, ECS has no proxy at the node level: load balancing goes directly between the ALB and the EC2 instances, so there is no extra hop of latency.
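Cold-start cost comes from work done outside the handler (loading the runtime, imports, SDK client initialization), which only happens on the first invocation of a fresh instance. A simulation of the effect, where a 50 ms sleep stands in for real initialization work:

```python
import time

_client = None  # expensive resource, created once per instance

def _init():
    """Stands in for imports/SDK setup done on a cold start (~50 ms)."""
    global _client
    time.sleep(0.05)
    _client = object()

def invoke():
    """First call on a fresh instance pays the cold-start cost;
    later calls on the same (warm) instance skip it."""
    if _client is None:       # cold start
        _init()
    return "ok"

start = time.perf_counter()
invoke()                                   # cold invocation
cold_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
invoke()                                   # warm invocation
warm_ms = (time.perf_counter() - start) * 1000
```

The warm call skips initialization entirely, which is the same reason an always-running container shows no equivalent first-request penalty.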
VPC and ENI

Lambda: If Lambda is deployed in a VPC, its concurrent execution is limited by the ENI capacity of the subnets.

Containers: The number of ENIs per EC2 instance ranges from 2 to 15, depending on the instance type. In ECS, each task is assigned a single ENI, so we can have a maximum of 15 tasks per EC2 instance.
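Under these constraints, task capacity becomes simple arithmetic: with one ENI per task and a per-instance ENI limit, capacity is instance count times the ENIs available for tasks. A sketch, following the article's simplification that every ENI on an instance is available to tasks:

```python
def max_ecs_tasks(instance_count, eni_limit_per_instance):
    """Upper bound on concurrently running ENI-per-task ECS tasks.

    Simplification from the text: all of an instance's ENIs are
    assumed available to tasks; real limits vary by instance type
    and networking configuration.
    """
    return instance_count * eni_limit_per_instance

# e.g. 4 instances of a type with a 15-ENI limit:
capacity = max_ecs_tasks(4, 15)
```

This kind of back-of-the-envelope check is useful when sizing a cluster for a target number of concurrent tasks.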
Monolithic Applications

Lambda: Lambda is not a fit for monolithic applications; it cannot run complex applications of this type.

Containers: ECS can be used to run a monolithic application.
Testing

Lambda: Testing is difficult in serverless-based web applications, as it is often hard for developers to replicate the backend environment locally.

Containers: Since containers run on the same platform wherever they are deployed, it's relatively simple to test a container-based application before deploying it to production.
Monitoring

Lambda: Lambda can be monitored through CloudWatch and X-Ray; you need to rely on the cloud vendor for monitoring capabilities. However, infrastructure-level monitoring is not required in this case.

Containers: Container monitoring requires capturing availability, system-error, performance, and capacity metrics in order to configure high availability for container applications.
When to Use Serverless
Serverless Computing is a perfect fit for the following use cases:
- If the application team doesn't want to spend time thinking about where its code runs and how.
- If the team doesn't have skilled infrastructure resources and is worried about the cost of maintaining servers and the resources the application consumes, serverless will be a great fit.
- If the application's traffic pattern changes frequently, serverless will handle it automatically, even shutting down when there is no traffic at all.
- Serverless websites and applications can be written and deployed without handling the work of setting up infrastructure. As such, it is possible to launch a fully-functional app or website in days using serverless.
- If the team needs a small batch job that can finish within Lambda's limits, it's a good fit.
When to Use Containers
Containers are best to use for application deployment in the following use cases:
- If the team wants to use the operating system of their own choice and leverage full control over the installed programming language and runtime version.
- If the team wants to use software with specific version requirements, containers are great to start with.
- If the team is currently bearing the cost of big, traditional servers for things such as web APIs, machine-learning computations, and long-running processes, containers are also worth trying; they will generally cost less than those servers.
- If the team wants to develop new container-native applications.
- If the team needs to refactor a very large and complicated monolithic application, containers are the better choice, as they are better suited to complex applications.
In a nutshell, both technologies are good and complement each other rather than competing. They solve different problems and should be used wisely.