Serverless Architectures on AWS
Check out this article excerpted from Peter Sbarski and Sam Kroonenburg's book, Serverless Architectures on AWS, and learn about the five principles of serverless architecture.
Going Serverless by Peter Sbarski with Sam Kroonenburg
If you ask software developers what software architecture is, you might get answers ranging from “it’s a blueprint or a plan” to “a conceptual model” to “the big picture.” It is undoubtedly true that architecture, or lack thereof, can make or break software. Good architecture may help to scale a web or mobile application, and poor architecture may cause serious issues that necessitate a costly rewrite. Understanding the implications of architectural choices and being able to plan ahead are paramount to creating effective, high-performing, and ultimately successful software systems.
This article is about why we think serverless is a game changer for software developers and solution architects. It introduces key services such as AWS Lambda and describes the principles of serverless architecture to help you understand what makes a true serverless system.
What’s in a Name?
Before we start, it should be mentioned that the word serverless is a bit of a misnomer. Whether you use a compute service such as AWS Lambda to execute your code or interact with an API, there are still servers running in the background. The difference is that these servers are hidden from us. There is no infrastructure for us to think about and no way to tweak the underlying operating system. Someone else takes care of the nitty-gritty details of infrastructure management, freeing our time for other things. Serverless is about running code in a compute service and interacting with services and APIs to get the job done.
How We Got to Where We Are
If you look at systems powering most of today’s web-enabled software you will see back-end servers performing various forms of computation and client-side front ends providing an interface for users to operate via their browser, mobile, or desktop device.
In a typical web application, the server accepts HTTP requests from the front end and processes requests. Data might travel through numerous application layers before being saved to a database. The back end, finally, generates a response—it could be in the form of JSON or fully rendered markup—which is sent back to the client (figure 1). Naturally, most systems are more complex once elements such as load balancing, transactions, clustering, caching, messaging, and data redundancy are taken into account. Most of this software requires servers running in data centers or in the cloud that need to be managed, maintained, patched, and backed up.
Figure 1. This is a basic request-response (client-server) message exchange pattern that most developers are familiar with. There is only one web server and one database in this figure. Most systems are much more complex.
Provisioning, managing, and patching of servers is a time-consuming task that often requires dedicated operations people. A non-trivial environment is hard to set up and operate effectively. Infrastructure and hardware are a necessary component of any IT system but they are often also a distraction from what should be the core focus—solving the business problem.
Over the past few years, technologies such as Platform-as-a-Service (PaaS) and containers have appeared as potential solutions to the headache of inconsistent infrastructure environments, conflicts, and server management overhead. PaaS is a form of cloud computing that provides a platform for users to run their software while hiding some of the underlying infrastructure. To make effective use of PaaS, developers need to write software that targets the features and capabilities of the platform. Moving a legacy application designed to run on a standalone server to a PaaS service may require additional development effort due to the ephemeral nature of most PaaS implementations. Still, given a choice, a lot of developers would understandably choose to use PaaS rather than more traditional, manual solutions thanks to reduced maintenance and platform support requirements.
Containerization is a way of isolating an application with its own environment. It is a lightweight alternative to full-blown virtualization. Containers are isolated and lightweight but they need to be deployed to a server—whether in a public or private cloud, or onsite. They are an excellent solution when dependencies are in play but they have their own housekeeping challenges and complexities. They are not as easy as simply being able to run code directly in the cloud.
Finally, we make our way to Lambda, which is a compute service from Amazon Web Services. Lambda can execute code in a massively parallelized way in response to events. Lambda takes your code and runs it without any need to provision servers, install software, deploy containers, or worry about low-level detail. AWS takes care of provisioning and management of their Elastic Compute Cloud (EC2) servers that run the actual code and provides a high-availability compute infrastructure—including capacity provisioning and automated scaling—that the developer doesn’t need to think about. The words serverless architectures refer to these new kinds of software architectures that do not rely on direct access to a server to work. By taking Lambda and making use of various powerful single-purpose APIs and web services, developers can build loosely coupled, scalable, and efficient architectures quickly. Moving away from servers and infrastructure concerns, as well as allowing the developer to primarily focus on code is the ultimate goal behind serverless.
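To make this concrete, here is a minimal sketch of what a Lambda function can look like in Python. The handler name and the API Gateway-style event shape are illustrative assumptions; the essential point is that Lambda invokes the function you nominate with an event and a context, and there is no server for you to provision or manage.

```python
import json


def handler(event, context):
    # Lambda calls this function with an event (a dict describing what
    # happened) and a context object. The event shape below is a
    # hypothetical API Gateway-style request for illustration.
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```

Because the handler is just a function, it can be exercised locally without any AWS infrastructure, for example with `handler({"queryStringParameters": {"name": "Ada"}}, None)`.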
Service-oriented Architecture & Microservices
Among a number of different system and application architectures, service-oriented architecture (SOA) has a lot of name recognition among software developers. It is an architecture that clearly conceptualized the idea that a system can be composed of many independent services. Much has been written about SOA but it remains controversial and misunderstood because developers often confuse design philosophy with specific implementation and attributes.
SOA doesn’t dictate the use of any particular technology. Instead, it encourages an architectural approach in which developers create autonomous services that communicate via message passing and, often, have a schema or a contract that defines how messages are created or exchanged. Service reusability and autonomy, composability, granularity, and discoverability are all important principles associated with SOA.
Microservices and serverless architectures are spiritual descendants of service-oriented architecture. They retain a lot of the aforementioned principles and ideas while attempting to address the complexity of old-fashioned service-oriented architectures.
There has been a recent trend to implement systems using microservices architecture. Developers tend to think of microservices as small, standalone, fully independent services built around a particular business purpose or capability. A microservice may have an application tier with its own API and a database.
Ideally, microservices should be easy to replace, with each service written in an appropriate framework and language. The mere fact that microservices can be written in different general-purpose or domain-specific languages (DSLs) is a drawing card for many developers. Although there are benefits to be gained from using the right language or a specialized set of libraries for the job, it can often be a trap too. Without strict discipline, a mix of languages and frameworks can be hard to support and can lead to confusion and difficulties down the road.
Each microservice may maintain its state and store data, which adds to the complexity of the system. Consistency and coordination management can become an issue too, because state must often be synchronized across disparate services. Microservices can communicate indirectly via a message bus or directly by sending messages to one another.
It can be argued that serverless architecture embodies a lot of principles from microservices too. After all, depending on how you design the system, every compute function could be considered to be its own standalone service. However, you don’t need to fully embrace the microservices mantra and develop every function or service around a particular business purpose, maintain its state, and so on.
Serverless architectures give you the freedom to apply as few or as many microservices principles as you would like without forcing you down a single path.
Software design has evolved from the days of code running on a mainframe to multitier systems of today where the presentation, data, and application/logic tiers feature prominently in many designs. Within each tier, there may be multiple logical layers that deal with particular aspects of functionality or domain. There are also cross-cutting components, such as logging or exception-handling systems, that can span numerous layers. The preference for layering is understandable. Layering allows developers to decouple concerns and have more maintainable applications.
However, the converse can also be true. Having too many layers can lead to inefficiencies. A small change can often cascade and cause the developer to modify every layer throughout the system, costing a lot of time and energy in implementation and testing. The more layers there are, the more complex and unwieldy the system might become over time. Figure 2 shows an example of a tiered architecture with multiple layers.
Figure 2. A typical three-tier application is usually made up of a presentation, application, and data tiers. In a tier there may be multiple layers that have specific responsibilities. A developer can choose how layers will interact with one another. This can be strictly top-down or in a loose way where layers can bypass their immediate neighbors to talk to other layers. The way layers interact will affect performance, dependency management, and application complexity. Then there is also functionality that spans multiple layers—this is referred to as a cross-cutting concern.
Serverless architectures can actually help with the problem of layering and having to update too many things. There is room for developers to remove or minimize layering by breaking the system into functions and allowing the front end to securely communicate with services and even the database directly, as shown in figure 3. All of this can be done in an organized way to prevent spaghetti implementations, and dependency nightmares by clearly defining service boundaries, allowing Lambda functions to be autonomous, and planning how functions and services will interact.
Figure 3: In a serverless architecture there is no single traditional back end. The front end of the application communicates directly with services, the database, or compute functions via an API gateway. Some services, however, must be hidden behind compute service functions where additional security measures and validation can take place.
A serverless approach doesn’t solve all problems, nor does it remove the underlying intricacies of the system. However, when it is implemented correctly it can provide opportunities to reduce, organize, and manage complexity. A well-planned serverless architecture can make future changes easier, which is an important factor for any long-term application. The next section discusses the organization and orchestration of services in more detail.
Tiers vs Layers
There is confusion among some developers about the difference between layers and tiers. A tier is a module boundary that exists to provide isolation between major components of a system. A presentation tier that is visible to the user is separate from the application tier, which encompasses the business logic. In turn, the data tier is another separate system that can manage, persist, and provide access to data. Components grouped in a tier can physically reside on different infrastructure.
Layers are logical slices that carry out specific responsibilities in an application. Each tier can have multiple layers within it, responsible for different elements of functionality such as domain services.
Principles of Serverless Architectures
There are five principles of serverless architecture that describe how an ideal serverless system should be built. Use these principles to help guide your decisions when you create a serverless architecture.
Use a compute service to execute code on demand (no servers)
Write single-purpose stateless functions
Design push-based, event-driven pipelines
Create thicker, more powerful front ends
Embrace third-party services
Let’s take a look at each of these principles in more detail.
Use a Compute Service to Execute Code on Demand
Serverless architectures are a natural extension of ideas raised in SOA. In a serverless architecture, all custom code is written and executed as isolated, independent, and often granular functions that are run in a stateless compute service such as AWS Lambda. Developers can write functions to carry out almost any common task, such as reading and writing to a data source, calling out to other functions, and performing a calculation. In more complex cases, developers can set up more elaborate pipelines and orchestrate invocations of multiple functions. There might be scenarios where a server is still needed to do something. These cases, however, may be few and far between, and as a developer you should avoid running and interacting with a server if possible.
Write Single-purpose Stateless Functions
As a software engineer, you should try to design your functions with the Single Responsibility Principle (SRP) in mind. A function that does just one thing is more testable and robust, and it leads to fewer bugs and unexpected side effects. By composing and combining functions and services together in a loose orchestration, you can build complex back-end systems that are still understandable and easy to manage. A granular function with a well-defined interface is also more likely to be reused within a serverless architecture.
Code written for a compute service such as Lambda should be created in a stateless style. It must not assume that local resources or processes will survive beyond the immediate session. Statelessness is very powerful because it allows the platform to quickly scale to handle an ever-changing number of incoming events or requests.
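As a sketch of both principles together, the function below does exactly one job and carries no state between invocations: everything it needs arrives in the event, and everything it produces goes back in the return value. The event fields and the function name are assumptions made for the example.

```python
def resize_request(event, context=None):
    # Single-purpose and stateless: no module-level counters, caches,
    # or open sessions are assumed to survive between invocations.
    # Hypothetical event shape: the caller supplies an image key and a
    # desired width, and this function does exactly one job.
    key = event["image_key"]
    width = int(event["target_width"])
    if width <= 0:
        raise ValueError("target_width must be positive")
    # In a real system this would enqueue or perform the resize; here
    # we return the job description to keep the sketch self-contained.
    return {"job": "resize", "key": key, "width": width}
```

Because the function depends only on its input, the platform can run any number of copies in parallel, which is exactly what allows Lambda to scale with the volume of incoming events.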
Design Push-based, Event-driven Pipelines
Serverless architectures can be built to serve any purpose. Systems can be built serverless from scratch, or existing monolithic applications can be gradually re-engineered to take advantage of this architecture. The most flexible and powerful serverless designs are event-driven. Figure 4 shows how we could build an event-driven, push-based pipeline by connecting Amazon’s Simple Storage Service (S3), Lambda, and Elastic Transcoder together.
Building event-driven, push-based systems will often reduce cost and complexity (you will not need to run extra code to poll for changes) and potentially make the overall user experience smoother. It goes without saying that while event-driven, push-based models are a good goal, they might not be appropriate or achievable in all circumstances. Sometimes you will have to implement a Lambda function that polls the event source or runs on a schedule.
Figure 4. A push-based pipeline style of design works well with serverless architectures. In this example, a user uploads a video to be transcoded to a different format. The upload creates an event that triggers a Lambda function. This function creates a transcode job. The transcode job is submitted to the transcode video service and a new video is created and then saved to another S3 bucket. The process of saving a new video triggers another Lambda function that in turn updates the database. A new entry in the database triggers our final function, which creates a notification and invokes a notification service for dispatch. In this example all function and services are responsible for just one action and the overall orchestration is easy to adjust.
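The first function in a pipeline like figure 4 might look like the sketch below: it receives an S3 “object created” event and turns it into a transcode job description. The `Records -> s3 -> bucket/object` layout is the shape S3 delivers to Lambda; the output bucket and preset names are assumptions made for the example.

```python
def create_transcode_job(event, context=None):
    # Walk the S3 event records and build one job per uploaded object.
    jobs = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        jobs.append({
            "input_bucket": s3["bucket"]["name"],
            "input_key": s3["object"]["key"],
            # Output location and preset are illustrative assumptions.
            "output_bucket": "transcoded-videos",
            "preset": "web-720p",
        })
    # A real function would submit each job to the transcoding service
    # here; returning the jobs keeps the sketch self-contained.
    return jobs
```

Note that the function never polls anything: S3 pushes the event to it, which is what keeps this style of pipeline cheap and simple.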
Create Thicker, More Powerful Front Ends
It is important to remember that custom code running in Lambda should be quick to execute. Functions that terminate sooner are cheaper, as Lambda pricing is based on the number of requests, duration of execution, and the amount of memory allocated. Having less to do in Lambda is cheaper. Furthermore, having a richer front end that can invoke services can be conducive to a better user experience. Fewer hops between online resources and reduced latency will result in a better perception of performance and usability of the application.
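A back-of-the-envelope calculation makes the pricing model concrete. The per-request and per-GB-second rates below are illustrative assumptions, not current AWS prices; check the Lambda pricing page for real numbers.

```python
PRICE_PER_REQUEST = 0.20 / 1_000_000  # assumed: $0.20 per million requests
PRICE_PER_GB_SECOND = 0.0000166667    # assumed GB-second rate


def monthly_cost(requests, avg_duration_ms, memory_mb):
    # Duration cost is billed in GB-seconds: execution time multiplied
    # by the fraction of a gigabyte of memory allocated.
    gb_seconds = requests * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND
```

Under these assumed rates, a million 200 ms invocations at 512 MB cost under two dollars, and halving the duration roughly halves the duration component, which is why functions that terminate sooner are cheaper.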
Digitally signed tokens can allow front ends to communicate directly with disparate services, including databases. This is in contrast to traditional systems where all communication flows through the back-end server. Having the front end communicate with services helps to create systems that need far fewer hops to get to the required resource.
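The mechanism behind such tokens can be sketched with a simple HMAC signature: a trusted issuer signs a set of claims with a secret, and a service verifies the signature before honoring the request. A real system would use an established standard such as JWT via a vetted library; the secret and claim names below are assumptions for this minimal illustration.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # assumption: secret shared by issuer and verifier


def issue_token(claims):
    # Encode the claims, then sign the encoded payload with the secret.
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig


def verify_token(token):
    # Recompute the signature and compare in constant time before
    # trusting any of the claims.
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    return json.loads(base64.urlsafe_b64decode(payload))
```

A front end holding such a token can present it to a service directly, and the service can check the signature without a round trip through a central back end.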
Not everything, however, can or should be done in the front end. There are secrets that cannot be trusted to the client device. Processing a credit card or sending emails to subscribers must be done only by a service that runs outside of the end-user’s control. In this case, a compute service is required to coordinate action, validate data, and enforce security.
The other important point to consider is consistency. If the front end is responsible for writing to multiple services and fails midway through, it can leave the system in an inconsistent state. In this scenario, a Lambda function should be used because it can be designed to gracefully handle errors and retry failed operations. Atomicity and consistency are much easier to enforce and control in a Lambda function than in the front end.
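The retry behavior described above can be sketched as a small wrapper: a Lambda function wraps each downstream write in a bounded retry so a transient failure does not leave the system half-updated. The helper name and attempt count are assumptions; the wrapped operation should be idempotent so retries are safe.

```python
def with_retries(operation, attempts=3):
    # Try the operation up to `attempts` times, re-raising the last
    # error only if every attempt fails.
    last_error = None
    for _ in range(attempts):
        try:
            return operation()
        except Exception as err:  # real code would catch specific errors
            last_error = err
    raise last_error
```

A function coordinating several writes can wrap each one, something a browser front end cannot be trusted to do reliably, which is why this coordination belongs in a Lambda function.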
Embrace Third-party Services
Third-party services are welcome to join the show if they can provide value and reduce custom code. There are a lot of services that developers can leverage these days from Auth0 for authentication to Stripe or Braintree for payment processing. As long as factors such as price, capability, and availability are considered, developers should try to adopt third-party services. It is far more useful for a developer to spend time solving a problem unique to his or her domain than re-creating functionality already implemented by someone else. Don’t build for the sake of building if viable third-party services and APIs are available. Stand on the shoulders of giants to reach new heights.
The cloud has been and continues to be a game changer for IT infrastructure and software development. Software developers need to think about how they can make the most of cloud platforms to gain a competitive advantage.
Serverless architectures are the latest advance for developers and organizations to think about, study, and adopt. It is an exciting new shift in architecture that will grow quickly as developers embrace compute services such as AWS Lambda. Today there are serverless applications that support thousands of users and carry out complex operations, including heavy-duty tasks such as video encoding and data processing. In many cases, serverless architectures can achieve a better outcome than traditional models and are cheaper and faster to implement.
There is also a need to reduce complexity and costs associated with running infrastructure and carrying out development of traditional software systems. The reduction in cost and time spent on infrastructure maintenance, and the benefits of scalability are good reasons for organizations and developers to consider serverless architectures. It is likely that the push for serverless back ends will accelerate over the coming years.
Published at DZone with permission of Peter Sbarski.