A Study of Hosting and Managing on Hybrid Multi-Cloud
This is my study of a real customer use case covering GitOps, a multi-cloud management system, and securing dynamic infrastructure secrets, using Red Hat's open source technology.
More companies are strategizing to run on hybrid cloud or even multi-cloud, for higher flexibility, resiliency, and sometimes simply because it is too risky to put all their eggs in one basket. This is a study based on real solutions using Red Hat's open source technology. The article abstracts the common, generic components of the implementation. It gives you an overall idea of the flow and why we chose certain technologies, to set you off at the right place to begin your own journey into hybrid multi-cloud environments.
The idea of distributed computing is not new: leverage the combined processing power, memory, and storage of multiple software components on multiple machine instances to achieve better performance. The problem now is how to scale out the deployment of these software components quickly across clouds while keeping the stability of actual machines. Teams want the freedom to bring up clusters close to the clients issuing requests and close to the data stores, due to data gravity. Sometimes they also want to deploy the part of the application that supports cognitive services on a specific cloud provider.
Hosting platforms on multiple clouds can be difficult, as it introduces extra complexity: finding people with knowledge of every cloud vendor, securing workloads across clouds, and maintaining governance across the board. The most common questions we hear from customers concern automation, security, and uniformity. Below, I break down how this study tackles those concerns using Red Hat and its partners' technologies.
We have logically separated it into three main areas:
Unified Management Hub: Hosts the management platform that manages all clusters, a vault that secures and issues infrastructure credentials, a repository that stores the infrastructure code, and a CI/CD controller that continuously monitors and applies updates. Many customers decide to host the hub in their own data center on top of their existing virtualization infrastructure.
Managed Clusters: These are the clusters that run the customer's applications, scaling up and down to meet distributed computing needs. Metrics and status are constantly synchronized back to the unified management hub. These clusters are deployed across major cloud vendors such as Azure, AWS, and Google Cloud.
Bootstrap Automation: A temporary instance used for bootstrapping the unified management hub. It consists of multiple Ansible playbooks that install all the components on the hub and set up the assigned administrative roles.
The Technology Stack
In this case study, the customers chose the following technologies, for the following reasons:
- Red Hat OpenShift Platform
- Instead of directly using and learning the offering from every vendor, or even learning the subtle differences between their Kubernetes offerings, a platform that sits on top of data centers, private clouds, and public clouds provides a unified way to deploy, monitor, and automate all the clusters.
- OpenShift GitOps
- Automates delivery through GitOps practices across multicluster OpenShift and Kubernetes infrastructure, with the choice of synchronizing cluster deployments automatically or manually according to what is in the repository.
- Core Monitoring
- OpenShift has a pre-configured, pre-installed, and self-updating monitoring stack that provides monitoring for core platform components. On top of that, we can also define monitoring for user-defined projects as well.
- Grafana Loki
- A horizontally scalable log aggregation system that is cost-effective and easy to operate, especially in a multi-cluster environment.
- External Secrets
- Enables the use of external secret management systems (HashiCorp Vault in this case) to securely inject secrets into the OpenShift platform.
- Red Hat Advanced Cluster Management for Kubernetes
- Controls clusters and applications from a single unified management hub console, with built-in security policies, cluster provisioning, and application lifecycle management. Especially important when managing on top of multiple clouds.
- Red Hat Ansible Automation
- Used to automate the configuration and installation of the management hub.
- HashiCorp Vault
- A secure, centralized store for dynamic infrastructure and application secrets across clusters, suited to the low-trust networks between clouds and data centers.
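To illustrate how Vault and External Secrets fit together, here is a minimal sketch of an ExternalSecret manifest that pulls a credential out of Vault and materializes it as a Kubernetes Secret. The store name, Vault KV path, and key names are illustrative assumptions, not values from the case study.

```yaml
# Hypothetical ExternalSecret: syncs a Vault secret into an OpenShift Secret.
# The SecretStore name, Vault path, and property names are assumptions.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: cloud-creds
  namespace: openshift-gitops
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend          # a ClusterSecretStore pointing at HashiCorp Vault
    kind: ClusterSecretStore
  target:
    name: cloud-creds            # the Kubernetes Secret that gets created
  data:
    - secretKey: aws_access_key_id
      remoteRef:
        key: infra/aws           # Vault KV path (assumption)
        property: access_key_id
```

The controller keeps the target Secret in sync on the `refreshInterval`, so rotating the credential in Vault propagates to every cluster without touching Git.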
The key to automation is "infrastructure as code": by versioning and storing clusters, networks, servers, data stores, and even applications as code in a centralized, controlled repository, the environment becomes agile, consistent, and far less error-prone. Creation and updates are pre-configured and applied simply by executing the code, with fewer human errors, and can be replicated across different environments.
We will start by bootstrapping the management hub. Here are the steps:
First, we need to set up the Red Hat OpenShift Platform (OpenShift) cluster that hosts the management hub. The OpenShift installation program provides flexible ways to get OpenShift installed; an Ansible playbook kicks off the installation with the desired configuration.
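The step above could look something like the following hedged sketch: an Ansible play that renders an install configuration and drives the `openshift-install` binary. The playbook structure, template names, and variables are assumptions for illustration.

```yaml
# Hypothetical Ansible play that bootstraps the hub cluster with openshift-install.
# install_dir and the template name are illustrative assumptions.
- name: Bootstrap the management hub cluster
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Render install-config.yaml from a template
      ansible.builtin.template:
        src: install-config.yaml.j2
        dest: "{{ install_dir }}/install-config.yaml"

    - name: Run the OpenShift installer
      ansible.builtin.command:
        cmd: "openshift-install create cluster --dir {{ install_dir }} --log-level info"
```

Keeping the installer invocation inside a playbook means the same entry point can later rebuild the hub from scratch, which is the whole point of infrastructure as code.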
Ansible playbooks are again used to deploy and configure Red Hat Advanced Cluster Management for Kubernetes (RHACM), and later the other supporting components (such as external secret management), on top of the provisioned OpenShift cluster.
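A playbook installing RHACM would, at minimum, apply manifests along these lines: an operator Subscription and a MultiClusterHub resource. This is a hedged sketch; the channel version is an assumption and should match your RHACM release.

```yaml
# Hypothetical manifests an Ansible playbook might apply to install RHACM.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: advanced-cluster-management
  namespace: open-cluster-management
spec:
  channel: release-2.9            # channel is an assumption; pick your version
  name: advanced-cluster-management
  source: redhat-operators
  sourceNamespace: openshift-marketplace
---
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec: {}                          # defaults are fine for a first installation
```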
Next, install Vault with an Ansible playbook. The vault we chose is from our partner HashiCorp; it manages secrets for all the OpenShift clusters.
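One common way to script this, sketched here under the assumption that the `kubernetes.core` Ansible collection is available, is to install Vault from its official Helm chart; the release name, namespace, and HA setting are illustrative choices.

```yaml
# Hypothetical Ansible play installing HashiCorp Vault via its official Helm chart.
- name: Install HashiCorp Vault on the hub cluster
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Deploy Vault via Helm
      kubernetes.core.helm:
        name: vault
        chart_ref: vault
        chart_repo_url: https://helm.releases.hashicorp.com
        release_namespace: vault
        create_namespace: true
        values:
          server:
            ha:
              enabled: true      # HA mode for a production-like hub (assumption)
```

After installation, Vault still needs to be initialized and unsealed before the clusters can request secrets from it.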
An Ansible playbook is used again to configure and trigger the OpenShift GitOps operator on the hub cluster, and to deploy an OpenShift GitOps instance for continuous delivery.
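Once the GitOps instance is running, the continuous-delivery loop is driven by Argo CD Application resources like the hedged sketch below; the repository URL, path, and namespaces are placeholders, not values from the case study.

```yaml
# Hypothetical Argo CD Application registered with OpenShift GitOps.
# repoURL and path are placeholders for the customer's infrastructure repo.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: managed-clusters
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://example.com/org/infra-repo.git
    targetRevision: main
    path: clusters/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: open-cluster-management
  syncPolicy:
    automated:
      prune: true                # remove resources deleted from Git
      selfHeal: true             # revert manual drift on the cluster
```

Dropping the `automated` block switches the Application to manual synchronization, which maps to the automatic-versus-manual choice described earlier.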
For identity management, we reuse the customer's existing identity provider as a source for OpenShift groups, and later use it to authenticate users logging into the hub and the managed clusters.
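Assuming the existing identity provider is LDAP, wiring it into OpenShift looks roughly like this OAuth configuration; the LDAP URL, bind DN, and Secret name are placeholders.

```yaml
# Hypothetical OAuth configuration pointing OpenShift at an existing LDAP
# identity provider; the URL and bind settings are placeholders.
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
    - name: corp-ldap
      type: LDAP
      mappingMethod: claim
      ldap:
        url: "ldaps://ldap.example.com/ou=users,dc=example,dc=com?uid"
        insecure: false
        bindDN: "cn=service,dc=example,dc=com"
        bindPassword:
          name: ldap-bind-password   # Secret in openshift-config (assumption)
```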
Now that the centralized unified management hub is ready to go, we can deploy clusters across multiple clouds to serve developers and end users. In my next article, I will go over my study on GitOps and how it simplifies provisioning and updating in this complex setting.
Published at DZone with permission of Christina Lin, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.