Hybrid Cloud: Cloud Rolls Out To Data Centers in Different Hues
The needs of businesses for security, disaster recovery, and offline execution have made hybrid cloud computing a popular cloud option.
The term "hybrid cloud" in popular vocabulary represents a topology in which an organization's IT infrastructure is spread across one or more public clouds and on-premise data centers. An on-premise data center can be the enterprise's own facility or a colocation facility it uses. Hybrid has lately also been extended to include edge locations, whether in a device or in a telecom provider's facility. These variants are sometimes also referred to as "private cloud."
In an ideal world the complete data center could be placed in the cloud, but in reality there are invariably use cases that require workloads to run on-premise. This is especially true of large enterprises with considerable IT assets, many of which need to continue to reside in the private cloud for various reasons.
Responding to this need, the three major cloud service providers (CSPs) — AWS, Azure, and GCP — have released a number of products over the last three to four years to enable hybrid cloud computing. In this post, we will look at the positioning of these products in each vendor's cloud portfolio, along with some common use cases.
Hybrid Use Cases
Cloud Bursting
Cloud bursting is a deployment topology in which regular traffic is directed to an on-premise deployment by a load balancer. Once traffic crosses a particular threshold, new instances are spun up in the public cloud and the additional traffic is directed there.
This model is primarily used for cost optimization. A common scenario is to provision additional infrastructure in the public cloud to handle seasonal spikes, then scale it back or dismantle it after traffic returns to normal. This is often cheaper than maintaining the same infrastructure on-premise, where it would sit idle during the longer periods of regular traffic.
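As a minimal sketch, the bursting decision is a threshold check on on-premise utilization. The pool names and the 80% threshold below are hypothetical; a real load balancer (e.g. HAProxy or a cloud load balancer) would implement this policy natively rather than in application code:

```python
# Minimal sketch of a cloud-bursting routing decision.
# Pool names and threshold are illustrative assumptions.

ON_PREM_POOL = "onprem-pool"     # handles regular traffic
CLOUD_BURST_POOL = "cloud-pool"  # spun up only for overflow

BURST_THRESHOLD = 0.8  # burst once on-premise utilization crosses 80%

def route_request(onprem_utilization: float) -> str:
    """Direct traffic on-premise until utilization crosses the
    burst threshold, then overflow to the public cloud pool."""
    if onprem_utilization < BURST_THRESHOLD:
        return ON_PREM_POOL
    return CLOUD_BURST_POOL
```

The same shape applies in reverse when traffic subsides: instances in the burst pool are drained and terminated, so costs accrue only during the spike.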
A similar scenario is to use public cloud storage for archival purposes, or when storage requirements exceed on-premise capacity.
Disaster Recovery (DR) and Failover to Public Cloud
Systems running in organizations' data centers experience unplanned downtime for various reasons, often causing business loss. To mitigate this, organizations plan different levels of disaster recovery depending on the criticality of the system or application. Setting up a traditional disaster recovery site requires building and operating an offsite data center, with associated costs that often look like unnecessary overhead. Hybrid cloud gives the flexibility of operating the disaster recovery infrastructure in the public cloud; both active/passive and active/active strategies can be implemented by leveraging it.
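In an active/passive setup, the failover decision can be as simple as counting consecutive failed health probes of the primary site before routing traffic to the cloud replica. The site names and the three-probe threshold below are illustrative assumptions, not any provider's API:

```python
# Sketch of an active/passive failover check: if the on-premise
# primary misses several consecutive health probes, serve traffic
# from the DR replica in the public cloud.

FAILOVER_AFTER = 3  # consecutive failed probes before failing over

def select_site(probe_results: list) -> str:
    """probe_results: recent health probes of the primary (bools),
    newest last. Returns which site should serve traffic."""
    recent = probe_results[-FAILOVER_AFTER:]
    if len(recent) == FAILOVER_AFTER and not any(recent):
        return "dr-public-cloud"
    return "primary-onprem"
```

Requiring several consecutive failures avoids flapping on a single dropped probe; real DR tooling layers DNS or routing changes on top of a check like this.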
Data Governance and Residency
Organizations often face data governance and residency requirements, such as PCI DSS in the payment card industry and HIPAA in healthcare. These requirements typically prohibit storing certain categories of data in the public cloud or beyond geographic boundaries. A hybrid topology lets the organization satisfy these restrictions by keeping the sensitive data on-premise and sending it to the public cloud only after suitable encryption or tokenization.
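One common pattern is to tokenize the sensitive field before a record leaves the on-premise boundary, so only an opaque token reaches the public cloud. The sketch below uses a plain in-memory dict as the token vault purely for illustration; a real deployment would use a hardened, audited vault that never leaves the data center:

```python
# Illustrative tokenization: replace a sensitive field (e.g. a card
# number) with an opaque token before the record is sent to the cloud.
import secrets

token_vault = {}  # token -> original value; stays on-premise

def tokenize(record: dict, field: str) -> dict:
    """Return a copy of the record safe to ship to the public cloud."""
    token = "tok_" + secrets.token_hex(8)
    token_vault[token] = record[field]
    safe = dict(record)
    safe[field] = token  # only the token travels off-premise
    return safe

def detokenize(token: str) -> str:
    """Resolvable only inside the on-premise boundary."""
    return token_vault[token]
```

Because the vault mapping never leaves the data center, a breach of the cloud copy exposes only meaningless tokens.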
Edge Computing
Edge computing processes data close to where it is generated rather than in a central location. Some workloads require single-digit-millisecond latency or even lower; these are placed on-premise at different edge layers, offloading the latency-sensitive processing to the edge while traditional workloads run in the public cloud.
Offline Execution
Some workloads need to operate in remote locations with intermittent or no internet access and sync with central servers once they come back into cellular network range. Use cases for offline execution include ocean liners, cargo carriers, and remote IoT devices.
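The usual mechanism behind this is store-and-forward: writes succeed locally while disconnected and are flushed to the central servers when connectivity returns. A minimal sketch, where `upload` stands in for whatever real sync call the platform provides:

```python
# Store-and-forward buffer for offline execution: events queue
# locally while disconnected and flush once back in network range.

class OfflineBuffer:
    def __init__(self):
        self.pending = []

    def record(self, event: dict) -> None:
        """Always succeeds locally, even with no connectivity."""
        self.pending.append(event)

    def sync(self, upload, online: bool) -> int:
        """Flush queued events in order when connectivity returns.
        Returns the number of events uploaded."""
        if not online:
            return 0
        sent = 0
        while self.pending:
            upload(self.pending.pop(0))
            sent += 1
        return sent
```

Preserving arrival order during the flush matters when the central servers reconstruct a timeline from the buffered events.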
Hybrid Cloud Deployment Models
Extend the Public Cloud Network to On-Premise
Network connectivity between the cloud provider's network and the on-premise data center can be set up in two different ways:
- Connect over the internet using VPN
- Connect over a dedicated connection between the on-premise data center and the cloud provider's network. To achieve suitable scalability within cost constraints, the cloud provider collaborates with local facility providers to offer points of presence (PoPs) where network routes terminate. End customers then buy the services of these PoP providers, who establish the last-mile connectivity to the organization's data center.
This allows servers in on-premise locations to access public cloud resources, and vice versa, via private IPs. Once connectivity is established, applications, compute, and storage resources in the public cloud and the on-premise data center can be integrated for seamless operation.
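A practical precondition for routing by private IPs is that the private address ranges on the two sides must not overlap, or traffic cannot be routed unambiguously. A quick pre-check is possible with Python's standard library; the CIDRs used below are only examples:

```python
# Before extending the cloud network on-premise, verify the VPC/VNet
# address space does not overlap the on-premise ranges.
import ipaddress

def ranges_overlap(cloud_cidr: str, onprem_cidr: str) -> bool:
    """True if the two private CIDR blocks share any addresses."""
    a = ipaddress.ip_network(cloud_cidr)
    b = ipaddress.ip_network(onprem_cidr)
    return a.overlaps(b)
```

Running this check for every on-premise subnet before provisioning the VPN or dedicated link avoids a painful re-addressing exercise later.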
Automate Service Management Across Environments
To keep processes and configuration consistent and ensure easy interoperability, a unified interface is needed to manage the IT resources spread across the public cloud and on-premise locations.
Microsoft announced Azure Arc as part of its Hybrid 2.0 offering at the recent Microsoft Ignite conference. Azure Arc extends the Azure control plane to manage on-premise resources, as well as resources from other cloud providers like AWS and GCP, from a single pane of glass. This allows even legacy VMs residing in on-premise locations to be accessible from Azure.
AWS already had a similar offering, Systems Manager, for managing resources in the public cloud and on-premise locations. However, it does not extend to other clouds.
Google recently announced Migrate for Anthos, which can containerize workloads running on-premise and move them to the public cloud.
Cloud Infrastructure at On-Premise Data Center
In this model, the cloud services run on hardware installed in the on-premise data center. All three CSPs have released products in this category in different variants.
Azure Stack (recently rechristened Azure Stack Hub): Microsoft released Azure Stack in 2016. Its provisioning process is much more involved than that of AWS Outposts. Organizations need separate contracts with Microsoft and with a solution provider who supplies the hardware, which can be installed at a location owned by either the organization or the solution provider. The solution provider ships the hardware and sends personnel to install the software and make Azure Stack Hub available for consumption.
AWS Outposts: AWS Outposts is a converged infrastructure rack installed on-premise. Unlike Azure Stack, Outposts is a fully managed service, hardware included. Ordering Outposts is much easier and can be done from the AWS console. Each Outpost is connected to a region; after installation, it appears as a local zone, similar to an Availability Zone, where we can create subnets and provision resources for workloads that need to run on-premise.
Google Anthos: In contrast to AWS Outposts and Azure Stack, Anthos is a pure software solution that promises a consistent experience for developers and operations teams, with the capability to run on-premise as well as on other cloud platforms.
Cloud Infrastructure at 5G Edge
The advent of 5G is expected to bring exponential growth in connected devices, leading to a surge in cellular traffic and demand for sub-millisecond response times in use cases like automated cars, AR/VR, and industrial IoT. Network traffic originating at the edge traverses the cell tower to the telecom provider's network and finally to the CSP's network, and the additional hops increase latency. This is overcome by processing requests on infrastructure placed within the telecom provider's network, eliminating the hops to the CSP's network. The model is roughly similar to the one described in the previous section, except that the infrastructure is deployed on the telecom provider's premises.
AWS launched AWS Wavelength at the latest re:Invent to give customers a choice for processing workloads that demand ultra-low latency. The service is delivered through AWS's partnerships with 5G telecom providers: Verizon, Vodafone in Europe, SK Telecom in South Korea, and KDDI in Japan.
Microsoft's Network Edge Compute (NEC) is a similar product in this category, processing workloads on infrastructure placed in the telecom provider's network.
For a more elaborate explanation, please read my previous article on 5G.
Conclusion
The primary driver for the hybrid cloud model is invariably business need rather than technology, and CSPs are catering to this requirement with a series of services, giving customers more choice and flexibility.
The CSPs have positioned their offerings differently.
Microsoft unveiled its Hybrid 2.0 strategy, which encompasses Azure Arc for manageability along with the three variants of Azure Stack, including Azure Stack Edge and Azure Stack HCI.
AWS has kept its edge services separate from the hybrid category. AWS earlier supported hybrid in patches: it started by shipping Snowball appliances to the customer's data center, provided Storage Gateway as an extension of data center storage, and offered Systems Manager for managing both on-premise and cloud workloads. However, a comprehensive hybrid platform was missing from the AWS stable. That gap was filled by AWS Outposts, introduced at re:Invent 2019.
Google has adopted an open model for Anthos, with the capability of running on-premise and on other clouds.
Given the plethora of available choices, customers need to evaluate carefully before formulating their hybrid strategy, keeping in mind cost, ease of use, and, above all, fit with their use cases.
Opinions expressed by DZone contributors are their own.