Microservices and Serverless on Azure
See how Azure is diving into the serverless world with offerings such as Azure Container Instances, Service Fabric Mesh, and Azure Functions.
“How Microsoft Is Shifting Focus to Open Source” is just one of the most recent headlines, and by now it should not come as a surprise that Azure not only embraces open source, but also lives it. Combine that dedication to open source and open-source products with a passion for developers, and you can expect very customer-centric and innovative approaches to the most recent trends in the IT industry. This article describes how Azure not only supports microservices, containers, and serverless technologies, but also innovates in this area. Before we dive into the topic, we need to quickly clarify the terminology, as there is still quite a lot of confusion and ambiguity about what microservices, containers, functions as a service (FaaS), and serverless mean.
Microservices: An architectural style, and as such completely platform- and technology-agnostic.
Containers: Docker, the most popular container technology, defines containers as a “standardized unit of software.” For modern, cloud-native applications, container images are the unit of deployment.
Serverless: From a technical perspective, serverless means that you do not need to worry about the underlying infrastructure, e.g., how many nodes a cluster needs, what the machine sizes should be, or how to scale the infrastructure. From a developer perspective, serverless adds an event-driven programming model, and from an economic perspective, you only pay per execution (CPU time consumed).
Function as a Service (FaaS): With FaaS, the function is the unit of work, which means that your code has a start and a finish. Functions are usually triggered by events that are emitted by either other functions or platform services. Most of the time, FaaS is implemented on top of serverless infrastructures, but one can also install a FaaS runtime on infrastructure managed by the user, e.g., the Azure Functions runtime on your own hardware.
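The event-triggered, start-to-finish nature of FaaS can be sketched in a few lines of plain Python. This is an illustrative simulation with hypothetical names, not a real Azure Functions signature; in a real FaaS platform, the runtime performs the event dispatch for you.

```python
import queue

# Illustrative sketch of the FaaS model (hypothetical names, not a real
# Azure Functions signature): a function is a short-lived unit of work that
# starts when a matching event arrives and finishes when it returns.
def handle_order_placed(event: dict) -> str:
    # The platform would invoke this only when an "order placed" event fires.
    return f"notified customer {event['customer_id']} about order {event['order_id']}"

# Simulate the runtime's event dispatch with a simple in-memory queue.
events = queue.Queue()
events.put({"customer_id": "c-42", "order_id": "o-1001"})

while not events.empty():
    print(handle_order_placed(events.get()))
```

Note that the function holds no state and runs no loop of its own; between events, nothing executes, which is exactly what makes pay-per-execution billing possible.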
Azure offers very comprehensive serverless computing capabilities, more than can be discussed in one article. The rest of this article focuses on the offerings that have container orchestration as part of their architecture: Azure Container Instances, Service Fabric Mesh, and Azure Functions.
Microservices and the Journey to Serverless
The microservices architectural style was really put on the map by born-in-the-cloud companies such as Netflix a couple of years ago. Some of the main drivers for designing their services in a loosely coupled way, centered around business capabilities, were agility and time to market. That loose coupling came at the cost of complexity, as microservices-based applications and their distributed nature require developers to understand distributed computing patterns.
In the early days, microservices-based applications ran on clusters of bare-metal machines or VMs. Installing software directly on machines often resulted in errors caused by missing dependencies, runtime versioning conflicts, resource contention, and a lack of isolation. Building VMs with specific runtimes and installing the software on them was an adequate approach to the problem, but it came at the cost of slow boot-up times, which impacts scale-out scenarios, and of heavy disk usage, as VM images tend to get really big.
It was Docker Inc. that finally took the concept of operating-system-level virtualization and made containers mainstream. While providing lower isolation levels than VMs, containers provide higher ones than processes; container images are typically smaller than VM images and boot up much faster. Developers came to love containers as a packaging format, and now pretty much every new application uses containers, while more and more legacy applications are being containerized.
With microservices being packaged in container images, orchestrators started to play a more important role. While there were several choices in the beginning, Kubernetes has become the most popular choice today. Orchestrators, however, added another variable to the equation, as development and operations teams need to understand them. Figure 1 shows a high-level topology of containerized microservices running on an orchestrator on VMs.
The management part of the environment has gotten better, as pretty much every cloud vendor now offers “orchestrators” as a service. Azure, for example, offers, among other container-based services, a fully managed Azure Kubernetes Service (AKS) and even a fully managed microservices platform, Azure Service Fabric. As with any cloud provider, “managed” Kubernetes, or managed PaaS (Platform as a Service) services in general, means that the setup and runtime part of the service is managed.
With AKS, for example, this means that, among other things, you only need to provide the number of nodes, the node sizes, the different node types, and the type of network configuration you want to use. Azure takes care of setting up the cluster, updating to new Kubernetes versions, and applying security patches for you. From a developer perspective, you still need to understand how Kubernetes works if you want to build microservices applications on top of it, as it is still more of an infrastructure play.
For example, a Kubernetes Service does not really represent the service code within a container; it just provides an endpoint to it, so that the code inside the container can always be accessed through the same endpoint.
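The stable-endpoint idea can be illustrated with a small sketch. This is a hedged simulation with hypothetical names, not the Kubernetes API: the point is that callers address the Service by its name while the pod addresses behind it may change at any time.

```python
import random

# Hedged sketch (hypothetical names, not the Kubernetes API): a "Service" is
# a stable endpoint in front of interchangeable pods, which can be replaced
# at any time without callers noticing.
class Service:
    def __init__(self, name: str):
        self.name = name      # stable, discoverable name, e.g. "orders"
        self.endpoints = []   # current pod addresses behind the endpoint

    def route(self) -> str:
        # Callers always hit the Service; it forwards to any live pod.
        return random.choice(self.endpoints)

orders = Service("orders")
orders.endpoints = ["10.0.0.5:8080", "10.0.0.7:8080"]  # pods come and go
backend = orders.route()  # same Service name, whichever pod is alive
```

Swapping out `orders.endpoints` leaves every caller untouched, which is the decoupling the article describes.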
As the journey outlined above shows, moving from monolithic applications to microservices has introduced three major hurdles for developers:
Developers need to understand distributed computing patterns
Developers need to understand how to build and work with container images and containers
Developers need to understand the basics of how orchestrators work
There is also an economic aspect to setting up and running clusters, as users are typically charged for compute hours: you pay as long as your nodes are up and running, even while your application sits idle or uses few resources. The remainder of this article discusses how Azure is trailblazing into the serverless world.
Microservices and Serverless
As mentioned before, it is important to understand that microservices and serverless are not analogous. One can design and develop microservices-based applications and run them on serverless infrastructure. The most critical components that have to be provided are networking, service discovery, routing, and eventing services, so that communication between the microservices can be established. Azure invests and innovates heavily in this space. Let’s look at the most obvious choices for building microservices on serverless in Azure.
Azure Functions
Azure Functions, one of the first serverless offerings from Azure, allows you to run code, in multiple languages, on demand in response to a variety of events. From a microservices perspective, the biggest difference between Azure Functions (or any other FaaS offering) on the one hand and Azure Container Instances and Service Fabric Mesh on the other is that with Azure Functions, the function is the unit of work, while in Azure Container Instances and Service Fabric Mesh, the entire container is the unit of work. In other words, functions start and finish based on triggers (usually events), while microservices in containers run the entire time. Functions also have the concept of bindings, which provide a declarative way to connect to data, e.g., a queuing or storage service, from within your code. Figure 4 shows the high-level concept of a function.
As expected, you are not required to provision or manage infrastructure; you simply pay a micro-billed premium for the value-added service. It is worth mentioning that the Azure Functions runtime is open source and can also be installed on your own servers. This is a good example of how FaaS frameworks can also run in traditional, non-serverless environments. From a microservices perspective, you can think of functions as complementing microservices scenarios.
For example, let’s assume you have a shopping application built using microservices. In such an application, an Order microservice can trigger a function notifying the customer that an order has been placed. The trigger itself is usually event-based and can be handled by Azure Event Grid, a service that acts as the glue for building event-driven, microservices-based applications, whether they run on a managed Azure Kubernetes Service, on the Service Fabric platform, or serverless using Azure Container Instances, Service Fabric Mesh, or Azure Functions.
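The order-notification pattern just described can be sketched as a tiny publish/subscribe loop. All names here are hypothetical; in a real system, Azure Event Grid would perform the routing between the Order service and the subscribed function.

```python
# Hedged sketch of the event-driven pattern above (hypothetical names; Azure
# Event Grid would do the routing in a real system): an Order microservice
# publishes an event, and a subscribed notification function reacts to it.
subscribers = {}

def subscribe(event_type: str, handler) -> None:
    subscribers.setdefault(event_type, []).append(handler)

def publish(event_type: str, payload: dict) -> list:
    # Event Grid analog: fan the event out to every registered handler.
    return [handler(payload) for handler in subscribers.get(event_type, [])]

def notify_customer(order: dict) -> str:
    # Stand-in for the Azure Function triggered by the event.
    return f"email sent for order {order['id']}"

subscribe("OrderPlaced", notify_customer)
results = publish("OrderPlaced", {"id": "o-7"})
```

The Order service never calls the notification code directly; it only emits an event, which keeps the two services loosely coupled.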
Service Fabric Mesh
Service Fabric Mesh is the newest serverless offering from Azure. It enables you to describe and deploy an entire microservices application and its dependencies with one declarative application model, without worrying about infrastructure. It is based on Service Fabric proper, the same scalable infrastructure that some of Azure’s largest internal services (such as Azure Cosmos DB) run on, except that it is abstracted away from the user; in other words, the user does not need to understand Service Fabric and its concepts at all. Figure 3 shows a topology that can be described with the SF Mesh application model.
Why is this such a big deal, you might ask? As mentioned before, microservices-based architectures depend heavily on networking and routing capabilities. SF Mesh automatically puts every service on the same network. It also adds an Envoy sidecar to each running container to enable sophisticated, data-aware traffic routing. In addition, SF Mesh offers high availability, scaling in/out, discoverability, orchestration, message routing, reliable messaging, no-downtime upgrades, security/secrets management, disaster recovery, state management, configuration management, and distributed transactions. From a developer and DevOps perspective, SF Mesh is a true serverless microservices platform that allows you to just focus on writing your code and setting up a proper CI/CD process.
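The sidecar idea can be illustrated with a minimal sketch. The names are hypothetical and this is not Envoy's actual behavior or configuration; it only shows the shape of the pattern: every request passes through a proxy deployed next to the service, which can apply routing rules before the service ever sees the traffic.

```python
# Illustrative sidecar sketch (hypothetical names, not real Envoy behavior):
# every request to a service first passes through a proxy deployed next to
# it, which can apply data-aware routing rules and decorate traffic.
def order_service(request: dict) -> dict:
    # The application code stays unaware of the proxy in front of it.
    return {"status": 200, "body": f"order {request['order_id']} accepted"}

def sidecar_proxy(app, request: dict) -> dict:
    # Data-aware routing: the proxy inspects the request before the app.
    if request.get("version") == "v2":
        request = dict(request, routed_by="sidecar")
    response = app(request)
    response["served_via"] = "sidecar"  # proxies can also tag responses
    return response

resp = sidecar_proxy(order_service, {"order_id": "o-3", "version": "v2"})
```

Because the routing logic lives in the sidecar rather than the service, it can be upgraded or reconfigured platform-wide without touching application code, which is what makes the pattern attractive for SF Mesh.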
Azure Container Instances (ACI)
Although Azure Kubernetes Service is a perfect platform for running containers architected as microservices, there is still the burden of “owning” the Kubernetes cluster. If you want to go completely serverless and use containers, you have the option of using Azure Container Instances (ACI), the CaaS (Containers as a Service) solution from Azure: serverless containers billed per second. At the time of writing, ACI by itself is not suitable as the sole deployment target for an entire microservices application due to the lack of networking support (which is supposed to ship in early September 2018).
The combination of Azure Kubernetes Service and Azure Container Instances (ACI) forms a truly powerful microservices platform. What makes it powerful is the ACI connector, based on Virtual Kubelet, Azure’s open-source, community-led project. Virtual Kubelet is a great example of driving the development of truly serverless Kubernetes experiences and has now been adopted by other cloud providers as well.
The combination of the two is a perfect platform for taking advantage of both microservices and serverless. Simply use AKS for deploying microservices, and for burst workloads go serverless by spinning up ACIs, reducing the scale-management burden and taking advantage of the per-second billing model. For example, an online retailer running a sale can spin up ACIs to absorb the additional burst demand instead of predicting the additional compute resources required, scaling out to provision the predicted resources, and finally scaling in when the burst traffic is over. This approach also reduces the management overhead of monitoring, provisioning, and deprovisioning resources. Figure 4 shows a Kubernetes deployment spanning AKS and ACI.
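The burst placement described above can be sketched as a simple scheduling decision. The capacity numbers and node names here are illustrative, not real AKS or ACI APIs: pods fill the cluster's own nodes first, and the overflow lands on the ACI-backed virtual node, which is billed per second.

```python
# Hedged sketch of the AKS + ACI burst pattern (capacity numbers and node
# names are illustrative, not real APIs): fill the AKS nodes first, then
# overflow extra pods to the ACI-backed virtual node.
AKS_CAPACITY = 3  # pods the cluster's own nodes can hold before bursting

def place_pods(num_pods: int, capacity: int = AKS_CAPACITY) -> list:
    placements = []
    for i in range(num_pods):
        # Overflow goes to the per-second-billed ACI virtual node.
        target = "aks-node" if i < capacity else "aci-virtual-node"
        placements.append((f"pod-{i}", target))
    return placements

# A sale causes a burst of 5 pods: 2 overflow to ACI instead of forcing a
# cluster scale-out that would later have to be scaled back in.
plan = place_pods(5)
```

When the burst subsides, the ACI-hosted pods simply disappear and billing stops, while the AKS node pool never had to grow.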
ACI can also consume events issued by other services and as such plays an important role in event-driven microservices architectures.
Why Is Serverless the Future?
A few folks may remember that Azure started out with PaaS back in 2010, as Microsoft believed customers should spend their precious time developing their applications, not caring about the underlying infrastructure. At the time, the market was just not there, as many customers were still trying to figure out what that “cloud” thing was, and, as a result, PaaS adoption lagged. Eight years later, the entire cloud market has matured, and with it the notion of how to build cloud-native applications. Serverless and microservices have replaced containers and DevOps as buzzwords. Nowadays, every cloud provider offers FaaS, which is at the forefront of serverless computing.
One of the promises of the cloud is to offer near-infinite scale, better infrastructure management, and better economics, and that promise demanded new ways of thinking about how to run applications. More mature customers are not only “rediscovering” the value of PaaS, but also want to take it to a new level, where infrastructure and its management do not play a role at all; this is what serverless is all about.
The bottom line is that customers, economics, and technology advancements will push serverless to become the future of modern cloud-native application development. Azure already offers a great home for serverless workloads, and inventions and new technologies such as Virtual Kubelet and Service Fabric Mesh will enable microservices-based applications on serverless infrastructure to become mainstream.
Opinions expressed by DZone contributors are their own.