Adapting Serverless Architecture


Let's start with the evolution of serverless architecture and walk through how it came to be, how it works, best practices, and the frameworks available.

This article is featured in the DZone Guide to the Cloud: Native Development and Deployment.

The serverless architectural style challenges the status quo of software design and deployment by minimizing development, operational, and management overhead. While it inherits elementary concepts from microservices architecture (MSA), it adds bleeding-edge architectural patterns for reducing hardware idling to the minimum possible.

Despite the remarkable advancements it brings, adopting it requires a thoughtful process for precisely mapping enterprise solution requirements to serverless computing.


The initial implementations of software systems, which were deployed on physical servers, could not optimally utilize the computation power of the underlying hardware since there could only be one instance of the operating system running at a given time. Later, after identifying the time-sharing capabilities in computing resources, multiple virtual computers were able to run on the same hardware concurrently by switching CPU and I/O operations among them.

This technological evolution led to many innovations in the industry, most importantly the inception of the cloud. At the time, virtual machines were the most manageable, scalable, and programmable units of isolated computing environments for deploying software. Linux container technology emerged around 2006, when Google implemented control groups (cgroups) as a Linux kernel feature.


Linux containers have been around ever since. However, only large-scale, technically sophisticated enterprises such as Google were able to use them at scale. Around 2012, the concept of microservices architecture was introduced by a group of software architects in Europe. In 2013, Docker rapidly filled the gaps in accessibility, usability, and supporting services in the container ecosystem, and containers consequently started to become popular.

Linux containers opened up a new horizon for decomposing large monolithic systems into individual, self-contained services and executing them with fine-grained resource utilization. Expediting these advancements, container cluster management systems such as Kubernetes and Mesosphere began to rise in the same period, providing end-to-end Container as a Service (CaaS) capabilities.

Later, in 2015, AWS took another leap forward by introducing AWS Lambda, which could further reduce software deployment costs by running microservices on demand and stopping them when there is no load. The concept is analogous to the stop-start feature in energy-efficient vehicles, which automatically shuts down the combustion engine to reduce fuel consumption.

How Does It Work?

Despite the term "serverless" sounding nonsensical at first glance, its actual meaning is the ability to deploy software without any involvement with infrastructure. Serverless platforms automate the entire process of building, deploying, and starting services on demand; users only need to register the required business functions and their resource requirements.
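As a concrete illustration, a registered function typically reduces to a handler plus a separately declared resource profile (memory, timeout, and so on). The following is a minimal sketch in the style of AWS Lambda's Python handler signature; the `name` field in the event payload is a hypothetical example, not a fixed platform field:

```python
import json

def handler(event, context=None):
    # The platform invokes handler(event, context) on demand;
    # provisioning, scaling, and teardown are managed by the provider.
    # 'event' carries the trigger payload.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The resource requirements themselves (e.g. memory size and execution timeout) are not expressed in code but in the function's registration metadata, which the platform uses when it allocates a container.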


Such functions can be classified into two main types: functions triggered by client requests and functions executed in the background by time triggers or events. Generally, such a serverless system can be implemented using a container cluster manager (CCM) with a dynamic router that spins up containers on demand. Nevertheless, it would also need to account for the latency of the router, container creation time, language support, protocol support, function interfaces, function initialization time, configuration parameter passing, the provisioning of certificate files, etc.
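The dynamic-router idea can be sketched as follows. This is a toy model, not a real CCM API: containers are simulated as plain dictionaries, and a "cold start" happens only when no container for the requested function is already running.

```python
class DynamicRouter:
    """Routes requests to functions, spinning up a simulated
    container on first use (a cold start)."""

    def __init__(self):
        self.running = {}  # function name -> simulated container

    def invoke(self, fn_name, fn, payload):
        if fn_name not in self.running:
            # Cold start: "create" the container before serving.
            self.running[fn_name] = {"fn": fn}
        container = self.running[fn_name]
        return container["fn"](payload)
```

A production router would additionally handle the concerns listed above: creation latency, initialization time, configuration passing, and protocol translation between the client and the function interface.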

Even though this deployment style calls for containers to be stopped when there is no load, in practice terminating containers immediately after serving requests would be costly, as more requests may arrive within short intervals. Therefore, serverless platforms usually preserve containers for a preconfigured period of time so they can be reused to serve further requests. This is analogous to autoscaling behavior in PaaS platforms: once a service is scaled up, instances are preserved for a certain period to process more requests rather than being terminated immediately.
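The preservation behavior amounts to a warm pool with an idle timeout. The sketch below models it under simplified assumptions (containers as plain dictionaries, an injectable clock for testability); the class name and TTL value are illustrative, not taken from any real platform:

```python
import time

class WarmPool:
    """Keeps idle containers alive for idle_ttl seconds so that
    follow-up requests avoid a cold start."""

    def __init__(self, idle_ttl=300.0, clock=time.monotonic):
        self.idle_ttl = idle_ttl
        self.clock = clock
        self.pool = {}  # function name -> (container, last_used)

    def acquire(self, fn_name):
        """Return (container, cold), reusing a warm container
        if one was used within the last idle_ttl seconds."""
        entry = self.pool.get(fn_name)
        now = self.clock()
        if entry is not None and now - entry[1] <= self.idle_ttl:
            container, cold = entry[0], False
        else:
            container, cold = {"name": fn_name}, True  # simulate creation
        self.pool[fn_name] = (container, now)
        return container, cold
```

Tuning the idle TTL trades cost against latency: a longer TTL keeps more capacity idle, while a shorter one forces more requests onto the cold-start path.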



Opinions expressed by DZone contributors are their own.
