A Look at Serverless Architectures
What is a "serverless architecture" and why would you use one? Read on to find out.
One of the latest architectural styles to appear on the internet is that of Serverless Architectures, which allow teams to get rid of the traditional server engine that sits behind their web and mobile applications. Mike Roberts has been working with teams that have been using this approach and has started writing an evolving article to explain this style and how to use it. He begins by describing what "serverless" means, with a couple of examples of how more usual designs become serverless.
Like many trends in software, there's no one clear view of what "Serverless" is, and that isn't helped by it really coming to mean two different but overlapping areas:
- Serverless was first used to describe applications that significantly or fully depend on third-party applications/services ("in the cloud") to manage server-side logic and state. These are typically "rich client" applications (think single page web apps, or mobile apps) that use the vast ecosystem of cloud accessible databases (like Parse, Firebase), authentication services (Auth0, AWS Cognito), etc. These types of services have been previously described as "(Mobile) Backend as a Service," and I'll be using "BaaS" as a shorthand in the rest of this article.
- Serverless can also mean applications where some amount of server-side logic is still written by the application developer but, unlike traditional architectures, is run in stateless compute containers that are event-triggered, ephemeral (may only last for one invocation), and fully managed by a third-party. (Thanks to ThoughtWorks for their definition in their most recent Tech Radar.) One way to think of this is "Functions as a service/FaaS." AWS Lambda is one of the most popular implementations of FaaS at present, but there are others. I'll be using "FaaS" as a shorthand for this meaning of serverless throughout the rest of this article.
Mostly I'm going to talk about the second of these areas because it is the one that is newer, has significant differences to how we typically think about technical architecture, and has been driving a lot of the hype around serverless.
However, these concepts are related and, in fact, converging. A good example is Auth0 — they started initially with BaaS "Authentication as a Service," but with Auth0 Webtask, they are entering the FaaS space.
Furthermore, in many cases, when developing a "BaaS-shaped" application, especially when developing a "rich" web-based app as opposed to a mobile app, you'll likely still need some amount of custom server-side functionality. FaaS functions may be a good solution for this, especially if they are integrated to some extent with the BaaS services you’re using. Examples of such functionality include data validation (protecting against imposter clients) and compute-intensive processing (e.g. image or video manipulation.)
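Before getting to the examples, here is a minimal sketch of the FaaS shape described above. The event format and the idea of a single `handle` entry point mirror what vendors like AWS Lambda do, but this is a generic illustration, not any vendor's actual API:

```python
# A minimal FaaS-style handler: stateless, event-triggered, ephemeral.
# The event shape and the "handle" entry point are illustrative
# assumptions, not a real vendor API.

def handle(event):
    """Invoked once per event by the platform; holds no state between calls."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# The platform, not your code, decides when and how often this runs:
print(handle({"name": "serverless"}))
# {'statusCode': 200, 'body': 'Hello, serverless!'}
```

The key property is that everything outside the function body (process lifecycle, scaling, routing of events) belongs to the platform.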
A Couple of Examples
Let’s think about a traditional 3-tier client-oriented system with server-side logic. A good example is a typical eCommerce app (dare I say an online pet store?)
With this architecture, the client can be relatively unintelligent, with much of the logic in the system — authentication, page navigation, searching, transactions — implemented by the server application.
With a serverless architecture, this may end up looking more like this:
This is a massively simplified view, but even so, a number of significant changes have happened here. Please note this is not a recommendation for an architectural migration; I'm merely using it as a tool to expose some Serverless concepts!
- We’ve deleted the authentication logic in the original application and have replaced it with a third-party BaaS service.
- Using another example of BaaS, we’ve allowed the client direct access to a subset of our database (for product listings), which itself is fully third-party hosted (e.g. AWS Dynamo.) We likely have a different security profile for the client accessing the database in this way from any server resources that may access the database.
- These previous two points imply a very important third — some logic that was in the Pet Store server is now within the client, e.g. keeping track of a user session, understanding the UX structure of the application (e.g. page navigation), reading from a database and translating that into a usable view, etc. The client is in fact well on its way to becoming a Single Page Application.
- Some UX-related functionality we may want to keep in the server, e.g. if it's compute intensive or requires access to significant amounts of data. An example here is "search." For the search feature, instead of having an always-running server we can implement a FaaS function that responds to HTTP requests via an API Gateway (described later.) We can have both the client and the server function read from the same database for product data.
- Because the original server was implemented in Java, and AWS Lambda (our FaaS vendor of choice in this instance) supports functions implemented in Java, we can port the search code from the Pet Store server to the Pet Store Search function without a complete re-write.
- Finally, we may replace our "purchase" functionality with another FaaS function, choosing to keep it on the server side for security reasons, rather than re-implement it in the client. It, too, is fronted by API Gateway.
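To make the Pet Store "search" function concrete, here is a hedged sketch of what an API-Gateway-fronted search handler might look like. The event format and the in-memory product list are illustrative assumptions; a real version would read products from the shared, third-party-hosted database:

```python
# Hypothetical Pet Store "search" FaaS function, fronted by an API Gateway.
# A hard-coded list stands in for the shared product database.

PRODUCTS = [
    {"id": 1, "name": "dog collar"},
    {"id": 2, "name": "cat bed"},
    {"id": 3, "name": "dog bowl"},
]

def search_handler(event):
    """Responds to an HTTP request routed here by the API Gateway."""
    query = event.get("queryStringParameters", {}).get("q", "").lower()
    matches = [p for p in PRODUCTS if query in p["name"].lower()]
    return {"statusCode": 200, "body": matches}

print(search_handler({"queryStringParameters": {"q": "dog"}}))
```

Nothing here runs between requests; the platform instantiates the function when the gateway forwards a request and may discard it immediately afterwards.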
A different example is a backend data-processing service. Say you're writing a user-centric application that needs to quickly respond to UI requests, but secondarily you want to capture all the different types of activity that are occurring. Let's think about an online ad system — when a user clicks on an advertisement, you want to very quickly redirect them to the target of the ad, but at the same time, you need to collect the fact that the click has happened so that you can charge the advertiser.
Traditionally, the architecture may look like this. The "Ad Server" synchronously responds to the user — we don’t care about that interaction for the sake of this example — but it also posts a message to a channel that can be asynchronously processed by a "click processor" application that updates a database, e.g. to decrement the advertiser’s budget.
In the serverless world this looks like:
There’s a much smaller difference to the architecture here compared to our first example. We’ve replaced a long-lived consumer application with a FaaS function that runs within the event driven context the vendor provides us. Note that the vendor supplies both the Message Broker and the FaaS environment — the two systems are closely tied to each other.
The FaaS environment may also process several clicks in parallel by instantiating multiple copies of the function code — depending on how we'd written the original process this may be a new concept we need to consider.
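The parallel-instantiation point can be sketched as follows: the platform may run many copies of the handler at once, so the handler must tolerate concurrency. The event shape is made up, and a thread pool stands in for the FaaS runtime's scaling behavior:

```python
# Sketch: a click-processing handler that a FaaS platform might invoke
# once per click event, potentially many copies in parallel. A thread
# pool stands in for the platform's scaling behavior.
from concurrent.futures import ThreadPoolExecutor
from collections import Counter
from threading import Lock

budget_spend = Counter()  # stands in for the advertiser-budget database
lock = Lock()             # the handler must be safe under parallelism

def process_click(event):
    with lock:
        budget_spend[event["advertiser_id"]] += event["cost"]

clicks = [{"advertiser_id": "acme", "cost": 1} for _ in range(100)]
with ThreadPoolExecutor(max_workers=8) as pool:  # "multiple copies" of the function
    list(pool.map(process_click, clicks))

print(budget_spend["acme"])  # 100
```

If the original long-lived consumer processed messages strictly one at a time, this kind of concurrency is exactly the new concept the paragraph above warns about.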
Unpacking "Function as a Service"
We've mentioned the FaaS idea a lot already, but it's time to dig into what it really means. To do this let's look at the opening description for Amazon's Lambda product. I've added some tokens to it, which I then expand upon.
- Fundamentally FaaS is about running back-end code without managing your own server systems or your own server applications. That second clause — server applications — is a key difference when comparing with other modern architectural trends like containers and PaaS (Platform as a Service.)
- If we go back to our click processing example from earlier, what FaaS does is replace the click processing server (possibly a physical machine, but definitely a specific application) with something that doesn’t need a provisioned server, nor an application that is running all the time.
- Let’s consider our click processing example again — the only code that needs to change when moving to FaaS is the "main method / startup" code, in that it is deleted, and likely the specific code that is the top-level message handler (the "message listener interface" implementation), but this might only be a change in method signature. All of the rest of the code (e.g. the code that writes to the database) is no different in a FaaS world.
- Since we have no server applications to run, deployment is very different from traditional systems — we upload the code to the FaaS provider and it does everything else. Right now that typically means uploading a new definition of the code (e.g. in a zip or JAR file) and then calling a proprietary API to initiate the update.
- Horizontal scaling is completely automatic, elastic, and managed by the provider. If your system needs to be processing 100 requests in parallel the provider will handle that without any extra configuration on your part. The "compute containers" executing your functions are ephemeral with the FaaS provider provisioning and destroying them purely driven by runtime need.
- Let's return to our click processor. Say that we were having a good day and customers were clicking on ten times as many ads as usual. Would our click-processing application be able to handle this? For example, did we write the code to handle multiple messages at a time? Even if we did, would one running instance of the application be enough to process the load? If we are able to run multiple processes, is scaling automatic or do we need to reconfigure it manually? With FaaS, you need to write the function ahead of time to assume parallelism, but from that point on the FaaS provider automatically handles all scaling needs.
- Functions in FaaS are triggered by event types defined by the provider. With Amazon AWS such stimuli include S3 (file) updates, time (scheduled tasks), and messages added to a message bus (e.g. Kinesis). Your function will typically have to provide parameters specific to the event source it is tied to. With the click processor, we made an assumption that we were already using a FaaS-supported message broker. If not, we would have needed to switch to one, and that would have required making changes to the message producer too.
- Most providers also allow functions to be triggered as a response to inbound HTTP requests, typically in some kind of API gateway. (e.g. AWS API Gateway, Webtask). We used this in our Pet Store example for our "search" and "purchase" functions.
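The earlier point about deleting the "main method / startup" code can be sketched like this. Both versions are illustrative; the broker API in the commented-out original and the FaaS handler signature are assumptions:

```python
# Traditional long-lived consumer (sketch): we own the main loop.
#
# def main():
#     broker = connect_to_broker()       # hypothetical broker client
#     while True:
#         message = broker.next_message()
#         handle_message(message)
#
# FaaS version: the main loop and broker wiring disappear; only the
# handler remains, with a signature the platform dictates.

def handle_message(message):
    """Shared business logic, unchanged between both worlds."""
    return {"advertiser_id": message["advertiser_id"], "charged": True}

def faas_handler(event):
    """Top-level entry point the FaaS platform invokes per batch of messages."""
    return [handle_message(m) for m in event["messages"]]

print(faas_handler({"messages": [{"advertiser_id": "acme"}]}))
```

The business logic (`handle_message`) is untouched; only the outermost wiring changes, which matches the claim that the rest of the code is no different in a FaaS world.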
FaaS functions have significant restrictions when it comes to local (machine/instance-bound) state. In short, you should assume that for any given invocation of a function, none of the in-process or host state that you create will be available to any subsequent invocation. This includes state in RAM and state you may write to local disk. In other words, from a deployment-unit point of view, FaaS functions are stateless.
This has a huge impact on application architecture, albeit not a unique one — the "Twelve-Factor App" concept has precisely the same restriction.
Given this restriction, what are alternatives? Typically it means that FaaS functions are either naturally stateless — i.e. they provide pure functional transformations of their input — or that they make use of a database, a cross-application cache (e.g. Redis), or network file store (e.g. S3) to store state across requests or for further input to handle a request.
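As a small illustration of externalizing state, here a plain dict stands in for Redis or S3; the interface is an assumption for illustration, not a real client library:

```python
# Sketch: because FaaS functions can't rely on in-process state surviving
# between invocations, cross-request state goes to an external store.
# A plain dict stands in for a cache like Redis or a file store like S3.

external_store = {}  # pretend this is shared, external, and durable

def count_visits(event, store):
    """Each invocation reads and writes the shared store, never a process-local cache."""
    user = event["user"]
    store[user] = store.get(user, 0) + 1
    return store[user]

# Two separate "invocations", possibly on two different containers,
# still see each other's writes because state lives outside the function.
count_visits({"user": "alice"}, external_store)
print(count_visits({"user": "alice"}, external_store))  # 2
```

Had `count_visits` kept the counter in a module-level variable instead, the second invocation could silently land on a fresh container and start again from zero.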
FaaS functions are typically limited in how long each invocation is allowed to run. At present AWS Lambda functions are not allowed to run for longer than 5 minutes, and if they do, they will be terminated.
This means that certain classes of long-lived task are not suited to FaaS functions without re-architecture, e.g. you may need to create several different coordinated FaaS functions where in a traditional environment you may have one long duration task performing both coordination and execution.
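One way to re-architect such a long-lived task, sketched in miniature (the chunking scheme and function names are hypothetical):

```python
# Sketch: replacing one long-running job with a coordinator function that
# fans work out to many short-lived worker invocations, each comfortably
# under the platform's execution time limit.

def worker(chunk):
    """One short FaaS invocation: processes a single chunk."""
    return sum(chunk)

def coordinator(data, chunk_size=3):
    """Another FaaS invocation: splits the job into chunks. Here it calls
    the workers synchronously; a real system would enqueue events instead."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    return sum(worker(c) for c in chunks)

print(coordinator(list(range(10))))  # 45
```

The price of this shape is the coordination itself: collecting and combining worker results becomes an explicit part of your design rather than a loop in one process.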
At present, how long it takes your FaaS function to respond to a request depends on a large number of factors, and may be anywhere from 10ms to 2 minutes. That sounds bad, but let's get a little more specific, using AWS Lambda as an example.
If your Lambda function is implemented on the JVM, you may occasionally see long response times (e.g. > 10 seconds) while the JVM is spun up. However, this only notably happens in either of the following scenarios:
- Your function processes events infrequently, on the order of longer than 10 minutes between invocations.
- You have very sudden spikes in traffic, for instance, you typically process 10 requests per second but this ramps up to 100 requests per second in less than 10 seconds.
The former of these may be avoided in certain situations by the ugly hack of pinging your function every 5 minutes to keep it alive.
Are these issues a concern? It depends on the style and traffic shape of your application. My former team has an asynchronous message-processing Lambda app implemented in Java which processes hundreds of millions of messages/day, and they have no concerns with startup latency. That said, if you were writing a low-latency trading application you probably wouldn’t want to use FaaS systems at this time, no matter the language you were using for implementation.
Whether or not you think your app may have problems like this, you should test with production-like load to see what performance you see. If your use case doesn't work now, you may want to try again in a few months time since this is a major area of development by FaaS vendors.
One aspect of FaaS that we brushed upon earlier is an "API Gateway." An API Gateway is an HTTP server where routes/endpoints are defined in configuration and each route is associated with a FaaS function. When an API Gateway receives a request, it finds the routing configuration matching the request and then calls the relevant FaaS function. Typically, the API Gateway will allow mapping from HTTP request parameters to inputs arguments for the FaaS function. The API Gateway transforms the result of the FaaS function call to an HTTP response and returns this to the original caller.
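The routing behavior just described can be sketched as a tiny in-process gateway. The route table and the request/response shapes are illustrative assumptions, not any real gateway's configuration format:

```python
# Sketch of what an API Gateway does: match a route from configuration,
# map HTTP request parameters to function arguments, and wrap the
# function's result in an HTTP response.

def search_fn(q):
    return {"results": [f"match for {q}"]}

ROUTES = {("GET", "/search"): (search_fn, ["q"])}  # route -> (function, params)

def gateway(method, path, params):
    entry = ROUTES.get((method, path))
    if entry is None:
        return {"status": 404, "body": "not found"}
    fn, wanted = entry
    args = [params[name] for name in wanted]   # HTTP params -> function arguments
    return {"status": 200, "body": fn(*args)}  # function result -> HTTP response

print(gateway("GET", "/search", {"q": "dog"}))
```

Everything interesting lives in the route configuration; the gateway itself is deliberately dumb plumbing between HTTP and the functions.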
Amazon Web Services have their own API Gateway and other vendors offer similar abilities.
Beyond purely routing requests, API Gateways may also perform authentication, input validation, response code mapping, etc. Your spidey-sense may be buzzing about whether this is actually such a good idea; if so, hold that thought — we'll consider it further later.
One use case for API Gateway + FaaS is for creating HTTP-fronted microservices in a serverless way with all the scaling, management and other benefits that come from FaaS functions.
At present, tooling for API gateways is achingly immature and so while defining applications with API gateways is possible, it's most definitely not for the faint-hearted.
The comment above about API Gateway tooling being immature actually applies, on the whole, to Serverless FaaS in general. There are exceptions, however — one example is Auth0 Webtask, which places significant priority on Developer UX in its tooling. Tomasz Janczuk gave a very good demonstration of this at the recent Serverless Conference.
Debugging and monitoring are tricky in general in serverless apps — we’ll get into this further in subsequent installments of this article.
One of the main benefits of Serverless FaaS applications is transparent production runtime provisioning, and so open source is not currently as relevant in this world as it is for, say, Docker and containers. In the future, we may see a popular FaaS/API Gateway platform implementation that will run "on premise" or on a developer workstation. IBM’s OpenWhisk is an example of such an implementation, and it will be interesting to see whether this, or an alternative implementation, picks up adoption.
Another example is Apex — a project to "Build, deploy, and manage AWS Lambda functions with ease." One particularly interesting aspect of Apex is that it allows you to develop Lambda functions in languages other than those directly supported by Amazon, e.g. Go.
What Isn’t Serverless?
So far in this article I've defined "serverless" to mean the union of a couple of other ideas — "Backend as a Service" and "Functions as a Service." I've also dug into the capabilities of the second of these.
Before we start looking at the very important area of benefits and drawbacks, I'd like to spend one more moment on definition, or at least defining what serverless isn't. I’ve seen some people (including me in the recent past) get confused about these things, and I think it's worth discussing them for clarity's sake.
Comparison with PaaS
Given that serverless FaaS functions are very similar to 12-Factor applications, are they in fact just another form of "Platform as a Service" (PaaS) like Heroku? For a brief answer, I refer to Adrian Cockcroft:

> "If your PaaS can efficiently start instances in 20ms that run for half a second, then call it serverless."
In other words, most PaaS applications are not geared towards bringing entire applications up and down for every request, whereas FaaS platforms do exactly this.
OK, but so what? If I'm being a good 12-Factor App developer, there's still no difference in how I code. That's true, but there is a big difference in how you operate your app. And because we're all good DevOps-savvy engineers, we're thinking about operations as much as we are about development, right?
The key operational difference between FaaS and PaaS is scaling. With most PaaSs you still need to think about scale, e.g. with Heroku how many Dynos you want to run. With a FaaS application, this is completely transparent. Even if you set up your PaaS application to auto-scale, you won't be doing this to the level of individual requests (unless you have a very specifically shaped traffic profile), and so a FaaS application is much more efficient when it comes to costs.
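To make the cost point concrete with deliberately made-up numbers — the prices below are invented for illustration and are not any vendor's actual rates:

```python
# Hypothetical cost comparison: an always-on PaaS instance billed per hour
# vs. per-invocation FaaS billing. All prices are invented for illustration.

HOURS_PER_MONTH = 730
PAAS_INSTANCE_PER_HOUR = 0.05    # hypothetical: billed whether busy or idle
FAAS_PER_INVOCATION = 0.0000002  # hypothetical per-request price

def monthly_cost_paas(instances):
    return instances * PAAS_INSTANCE_PER_HOUR * HOURS_PER_MONTH

def monthly_cost_faas(requests_per_month):
    return requests_per_month * FAAS_PER_INVOCATION

# A spiky, low-traffic service handling one million requests a month:
print(round(monthly_cost_paas(1), 2))          # 36.5
print(round(monthly_cost_faas(1_000_000), 2))  # 0.2
```

The shape of the result, not the specific numbers, is the point: when you pay per request, idle time costs nothing, which is where the per-request scaling of FaaS translates into cost efficiency.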
Given this benefit, why would you still use a PaaS? There are several reasons, but tooling and maturity of API gateways are probably the biggest. Furthermore, 12-Factor Apps implemented in a PaaS may use an in-app readonly cache for optimization, which isn't an option for FaaS functions.
Serverless doesn't mean "No Ops." It might mean "no internal sys admin," depending on how far down the serverless rabbit hole you go. There are two important things to consider here.
Firstly, "Ops" means a lot more than server administration: it also means at least monitoring, deployment, security, and networking, and often some amount of production debugging and system scaling as well. These problems all still exist with serverless apps, and you're still going to need a strategy to deal with them. In some ways, ops is harder in a serverless world because so much of this is so new.
Second, even the sys admin work is still happening — you're just outsourcing it with serverless. That's not necessarily a bad thing; we outsource a lot. But depending on what precisely you're trying to do, this might be good or bad, and either way, at some point the abstraction will likely leak and you'll need to know that human sys admins somewhere are supporting your application.
Stored Procedures as a Service
Another theme I've seen is that Serverless FaaS is "Stored Procedures as a Service." I think that's come from the fact that many examples of FaaS functions (including some I've used in this article) are small pieces of code that wrap access to a database. If that's all we could use FaaS for I think the name would be useful, but because it is really just a subset of FaaS's capability, then thinking of FaaS in such a way is an invalid constraint.
That being said it is worth considering whether FaaS comes with some of the same problems of stored procedures, including the technical debt concern Camille mentions in the referenced tweet. There are many lessons that come from using stored procs that are worth reviewing in the context of FaaS and seeing whether they apply. Some of these are that stored procedures:
- Often require vendor-specific language, or at least vendor-specific frameworks/extensions to a language.
- Are hard to test because they need to be executed in the context of a database.
- Are tricky to version control/treat as a first class application.
Note that not all of these may apply to all implementations of stored procs, but they're certainly problems that I've come across in my time. Let’s see if they might apply to FaaS:
(1) is definitely not a concern for the FaaS implementations I’ve seen so far, so we can scrub that one off the list right away.
For (2), because we're dealing with "just code," unit testing is definitely just as easy as any other code. Integration testing is a different (and legitimate) question though which we'll discuss later.
For (3), again, since FaaS functions are "just code," version control is OK. But as for application packaging, there are no mature patterns on this yet. The Serverless framework does provide its own form of this, and AWS announced at the recent Serverless Conference in May 2016 that they are working on something for packaging also ("Flourish"), but for now this is another legitimate concern.
This is an evolving publication, and I shall be extending it over the coming days and weeks to cover more topics on serverless architecture including the benefits and drawbacks of this approach, and where we might see serverless evolve over the next year or two.
Published at DZone with permission of Mike Roberts . See the original article here.