Which Serverless Platform Should You Use?
With the variety of serverless platforms now available, this article compares the popular offerings and the important considerations to keep in mind when choosing one.
Important Functional and Monitoring Considerations When Selecting a Serverless Platform
We previously discussed core serverless concepts in an Intro to Serverless Computing. Here, we'll look at important considerations across the myriad serverless platforms available today, split between proprietary cloud providers and self-hosted open source solutions.
Open Source Platforms
OpenFaaS, Kubeless, Fn, OpenWhisk, and numerous others: serverless is a hot topic at the moment. Most of the open source offerings run on Kubernetes, so they can run on Kubernetes as a Service (KaaS) in the cloud or on an internal Kubernetes cluster if you need to keep things in-house. Here's a fun question: is running a serverless platform on your own servers an oxymoron?
All of these open source projects are still in their early days. None have released a version 1.0 yet, and there is currently no clear indication as to which one will be the most popular.
Runtime support across these open source platforms is broad, with a wide range of popular languages included along with the ability to build custom runtimes. Each function is typically deployed as a Docker container; as long as the container meets the platform's interface requirements, it will run. Serverless COBOL functions, anyone?
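To make the container interface concrete, here is a minimal sketch of a function body, loosely modelled on the classic OpenFaaS Python template; the file layout, function name, and echo behaviour are illustrative assumptions, not any platform's exact contract.

```python
# handler.py -- a minimal OpenFaaS-style function body (illustrative sketch).
# The platform's watchdog passes the request body in as a string and
# sends back whatever the function returns.
def handle(req):
    # trivial example behaviour: echo the request body, upper-cased
    return req.upper()
```

A platform template wraps a handler like this in a small web server inside the Docker image, which is why any language that can read a request and write a response can be made to fit.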
With all these serverless platforms, observability is vital: they add another layer of complexity on top of Kubernetes, an already complex platform. The smooth operation of the hosted functions depends on the smooth operation of both the serverless platform and Kubernetes. Some of these projects have already thought about observability and provide a Prometheus metrics endpoint. Fn also includes OpenTracing implementations for Zipkin and Jaeger.
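A Prometheus metrics endpoint simply serves plain text in the Prometheus exposition format, which a Prometheus server scrapes on a schedule. The sketch below renders that format with the standard library only; real platforms use a client library such as prometheus_client, and the metric name here is hypothetical.

```python
# Stdlib-only sketch of the text a /metrics endpoint serves.
# Real code would use a Prometheus client library instead.
invocations = 0  # incremented by the platform each time a function runs

def render_metrics():
    # Prometheus text exposition format: HELP and TYPE comment lines,
    # then one sample line per metric
    return (
        "# HELP function_invocations_total Total function invocations\n"
        "# TYPE function_invocations_total counter\n"
        f"function_invocations_total {invocations}\n"
    )
```

Because the format is this simple, pointing an existing Prometheus installation at a serverless platform's endpoint is usually a one-line scrape-config change.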
Proprietary Cloud Platforms
The usual suspects are present: Amazon Web Services Lambda, Google Cloud Functions, Microsoft Azure Functions, and, most recently, IBM with a hosted version of OpenWhisk. Lambda from Amazon Web Services (AWS) has been around the longest and is the most mature offering; it already runs significant parts of Amazon's Alexa service.
All these hosted offerings provide the same basic functionality: functions hosted in the cloud that cost nothing when idle and are billed in fine-grained time increments as they execute. All platforms provide a web user interface and CLI tools for managing functions. Function triggers can be plumbed into the cloud platform's other services; AWS has the richest set available.
All the platforms provide basic monitoring and log aggregation facilities. AWS Lambda leads in observability with X-Ray, which provides end-to-end tracing across various AWS services. Google's Stackdriver tracing is currently only available as a preview release and does not yet support automatic tracing of serverless functions. Microsoft Azure and IBM OpenWhisk do not offer any tracing capability.
Operating Heterogeneous Services
With such a wide choice of serverless platforms, the question is which one best suits your needs. The good news is that you don't have to pick just one. The Serverless Framework project provides both common tooling for managing functions and an Event Gateway for mapping events to functions.
Using one definition file and one command-line tool, it is possible to deploy serverless functions to many providers, in any language runtime those providers support. This level of automation makes moving functions between providers less painful. However, functions are not truly portable: there is currently no standard for function entry points, for returning data, or for the libraries available at runtime.
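The "one definition file" is the Serverless Framework's serverless.yml. The fragment below is an illustrative sketch only; the service name, handler path, and runtime are hypothetical, and in practice switching the provider block also means porting the handler code to that provider's entry-point conventions.

```yaml
# serverless.yml -- illustrative sketch; names and paths are hypothetical
service: image-resizer

provider:
  name: aws            # deploy target; e.g. google or azure need matching plugins
  runtime: nodejs8.10  # a runtime identifier the chosen provider supports

functions:
  resize:
    handler: handler.resize   # module and exported function to invoke
    events:
      - http:                 # wire the function to an HTTP trigger
          path: resize
          method: post
```

With a file like this, `serverless deploy` packages and publishes the function, which is the common tooling the text refers to.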
While each cloud provider has its own API gateway, these typically offer little convenience for multi-provider solutions or for portability. The Serverless Event Gateway provides a vendor-agnostic alternative, offered both as a service and as a Docker image you can run wherever you want. Because this gateway is not tied to any vendor, it can receive events from any provider or external source and route them to any other provider or external destination.
Utilising a third-party gateway makes it possible to swap out serverless endpoints with minimal configuration changes.
[Figure: Serverless Event Gateway flow]
For example, the client calls the Event Gateway via HTTP and the event is initially routed to AWS Lambda for processing. With a simple configuration change, the same client call could instead be routed to Google Cloud Functions; the client would not need to be reconfigured.
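The routing idea can be sketched in a few lines. This is a conceptual model only, not the Event Gateway's actual API: the event name, backend names, and handler stubs are all hypothetical, and the point is simply that the client-facing entry stays fixed while a routing table changes.

```python
# Conceptual sketch of gateway-based routing: the client always calls
# gateway(); only the ROUTES table decides which provider handles the event.
ROUTES = {"user.created": "aws_lambda"}

# stand-ins for provider-hosted functions (hypothetical)
BACKENDS = {
    "aws_lambda": lambda payload: f"processed by Lambda: {payload}",
    "google_cloud_functions": lambda payload: f"processed by GCF: {payload}",
}

def gateway(event_type, payload):
    # look up the configured backend and forward the event to it
    return BACKENDS[ROUTES[event_type]](payload)

# swapping providers is a configuration change, not a client change:
ROUTES["user.created"] = "google_cloud_functions"
```

After the last line, the same `gateway("user.created", ...)` call reaches the other provider, which is exactly the swap the paragraph above describes.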
The Future for Serverless
It is still a wild frontier out there with many offerings and no real standards. Increasing the fragmentation of applications into discrete functions does offer advantages for CI/CD and compute resource efficiency but at the cost of greater complexity and the risk of being tied to a platform.
With the open source offerings still very early in their development, reliability is not yet up to production standards. For example, I tried to deploy several of the projects to Google Kubernetes Engine using their supplied Helm charts, and only one deployed successfully.
The ability to observe the performance of both the serverless framework and the functions it is running is essential for production environments. The leader of the commercial offerings is Amazon with CloudWatch and X-Ray. For open source, the leader is Fn as it already includes both Prometheus metrics and Jaeger/Zipkin tracing.
Deploying an open source serverless platform to Kubernetes creates a number of Deployment, Pod, and Container components.
The example above shows OpenFaaS with one function hosted. Most of the open source platforms currently use a separate Docker image for each function, resulting in a separate Deployment on Kubernetes.
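As a rough sketch of what one of those per-function Deployments looks like, here is a hand-written manifest in the shape OpenFaaS generates; the function name, image, and label values are illustrative assumptions rather than output copied from a real cluster.

```yaml
# Hypothetical Deployment for a single function -- one image per function
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resize-image          # illustrative function name
  namespace: openfaas-fn      # OpenFaaS deploys functions into this namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      faas_function: resize-image
  template:
    metadata:
      labels:
        faas_function: resize-image
    spec:
      containers:
        - name: resize-image
          image: example/resize-image:latest  # the function's own Docker image
          ports:
            - containerPort: 8080             # watchdog HTTP port
```

Ten functions therefore mean ten Deployments of this shape, which is why cluster-level monitoring matters as the function count grows.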
[Figure: serverless function container]
With Instana's support for Kubernetes cluster monitoring, all these Deployments are automatically detected and monitored. As standards for tracing through these platforms evolve, Instana will adopt them to provide fully automatic distributed tracing.
Serverless is still very much in its infancy and Instana is watching its first faltering steps with interest.
Published at DZone with permission of Steve Waterworth, DZone MVB.