Top 3 Considerations for Building A Cloud Native Foundation in Your Enterprise
With the constant innovation of cloud computing, businesses should be looking toward creating cloud-native infrastructures.
The key business imperative driving moves to new data center architectures is their ability to natively support digital applications. Digital applications are "Cloud Native" (CN) in the sense that these interactive applications are written from the outset for cloud-based IaaS deployments. With Kubernetes and serverless computing, there now exist industrial-scale alternatives to simply porting monolithic applications over to the cloud. Cloud Native application development is thus emerging as the most important trend in digital platforms, and one that determines enterprise competitiveness. This blog post identifies the three key considerations of embarking on an enterprise CN strategy.
Every Enterprise Needs a Cloud Native Strategy
Cloud Native applications need to be architected, designed, developed, packaged, delivered, and managed based on a deep understanding of the frameworks of cloud computing. The application itself is designed from the get-go for scalability, resiliency, and incremental enhancement. Depending on the application, supporting tenets include IaaS deployment and management and container orchestration. These applications need to support development and incremental enhancement using agile principles. The fundamental truth is that this changes not only how your infrastructure is provisioned and deployed, but also how it is managed.
An illustration of the Cloud Native stack as of 2018 is shown below. The most important projects emerging from it are the container orchestration platform Kubernetes and hybrid cloud infrastructure. No matter which IaaS provider one picks, Kubernetes can be the single standard that applications are developed against.
Perform an Enterprise-Wide Application Portfolio Rationalization Assessment
One of the key things I recommend organizations do at the onset of their cloud journey is to perform an assessment, enterprise-wide or for key departments, of both their application landscape and their current strategic initiatives. It is very important to understand which of these applications across departments can benefit from a cloud-based development and delivery model based on business requirements. The move to the cloud is dictated by quantitative factors, such as economics (infrastructure costs, developer/admin training and interoperability costs), return on investment (ROI), and the number of years or quarters until break-even, and by qualitative factors, such as the tolerance of the business for short-term pain and the need for the enterprise to catch up with and disarm the competition. It may also be very useful to combine this analysis with existing IT vendor investments across the full global infrastructure footprint so that a holistic picture of the risk/reward continuum can be built. One also needs to consider whether planned cloud spending can be incorporated into existing legacy modernization/re-platforming projects or data-center consolidation projects.
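The break-even factor mentioned above can be illustrated with a back-of-the-envelope calculation. This is a minimal sketch with purely hypothetical cost figures, not real benchmarks:

```python
# Hypothetical break-even estimate for a cloud migration.
# All dollar figures below are illustrative assumptions.

def breakeven_quarters(migration_cost, legacy_cost_per_q, cloud_cost_per_q):
    """Return the number of quarters until cumulative savings
    cover the one-time migration cost (None if never)."""
    savings_per_q = legacy_cost_per_q - cloud_cost_per_q
    if savings_per_q <= 0:
        return None  # cloud run-rate is not lower; no break-even point
    quarters = 0
    recovered = 0.0
    while recovered < migration_cost:
        recovered += savings_per_q
        quarters += 1
    return quarters

# Example: $400k migration, $150k/quarter legacy vs. $100k/quarter cloud.
print(breakeven_quarters(400_000, 150_000, 100_000))  # -> 8 quarters
```

If the cloud run-rate is not actually lower than the legacy run-rate, there is no break-even at all, which is exactly the OpEx trap discussed below.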
Another important consideration is that public cloud spending is easy to underestimate. Once lines of business in large organizations start using public clouds, the financial promise of zero CapEx can be undone as OpEx costs begin to run amok. In many of these cases, a private cloud powered by a commodity open source platform such as OpenStack may be the right way to begin. To counter the complexity of OpenStack, it may be a step in the right direction to consider a SaaS-managed OpenStack control plane so that risk is minimized for both the operator and the developer experience. This is a key theme that will be expanded on in later posts.
Let us be clear that not every enterprise application is a candidate for cloud migration. Given a monolithic departmental application running on a legacy virtual machine, what are the ideal criteria for deciding when to migrate it?
At a very high level, I recommend starting with legacy applications that serve a limited community of interest, one that isn't anticipated to grow much or to drive frequent changes to the concerned suite of applications. These legacy applications can be made resident in a private cloud leveraging OpenStack.
They can then be incrementally enhanced over time (starting with changes to their provisioning, development, management, etc.) to take advantage of the private cloud design, until business needs dictate that they be migrated to a true CN development model.
Enterprise CIOs also need to ensure that their investments in the cloud don't result in significant container or VM sprawl, which would compound the technical-debt challenge.
Consideration #1: Adopt Hybrid Cloud
As discussed above, a range of cloud choices exist, namely:
- The public cloud providers — Amazon AWS, Microsoft Azure, and Google Cloud Platform
- Open private cloud platforms such as OpenStack
- Proprietary cloud or legacy virtualization approaches — VMware, Xen, etc.
- Converged hardware infrastructure
- Enterprise cloud services such as IBM, Oracle, etc.
- SaaS platforms such as Salesforce, Workday, etc.
When you combine the above notion with the complex vendor landscape out there, a few important truths emerge:
- The Enterprise Cloud will be hybrid, no question. However, one needs to pick and stick with a unified set of standards for development.
- Workloads will be placed on different providers based on business and cost considerations. Examples include flexibility and the advantages of the application frameworks and data services a given cloud vendor provides.
- IaaS lock-in makes zero sense from both a business and a technology perspective. Using a SaaS-based management plane that supports multiple cloud providers, managed Kubernetes, and open source serverless computing technology should help in avoiding lock-in as much as possible.
- Multi-cloud management is a challenge your cloud admins need to deal with and something executives need to account for in the entire business case — economics, value realization, headcount planning, etc.
Consideration #2: Adopt Kubernetes
It may seem odd to find so much mention of a software platform in a blog about the enterprise cloud, but Kubernetes is a very special project and perhaps the most transformational cloud technology. Across all the above cloud provider choices, containers are unquestionably the granular unit of application development and deployment. Kubernetes is the de facto standard in container orchestration across multiple cloud providers. As far as technology goes, this is a sure bet and one you can't go wrong with.
With its focus on grouping containers together into logical units called pods, Kubernetes enables lightweight deployment of microservice-based multi-tier applications.
Kubernetes provides auto-scaling (both up and down) to accommodate usage spikes. It also provides load balancing to ensure that usage across hosts is evenly balanced. The controller supports rolling updates and canary deployments to ensure that applications can be seamlessly and incrementally upgraded. The service abstraction then gives a set of logical pods an externally facing IP address. A service can be discovered by other services as well as scaled and load balanced independently. Labels, which are (key, value) pairs, can be attached to any of the above resources. Kubernetes is designed for both stateless and stateful apps, as it supports mounting both ephemeral and persistent storage volumes.
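The label/selector mechanics behind the service abstraction can be sketched in plain Python. This is a simplified conceptual model, not the actual Kubernetes API; the pod and label names are invented for illustration:

```python
# Simplified model of how a Kubernetes Service selects pods via labels.
# A selector matches a pod when every (key, value) pair in the selector
# is present in the pod's labels.

def selects(selector, pod_labels):
    return all(pod_labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "web-1", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "web-2", "labels": {"app": "web", "tier": "frontend"}},
    {"name": "db-1",  "labels": {"app": "db",  "tier": "backend"}},
]

# A Service with selector app=web load-balances across the matching pods.
service_selector = {"app": "web"}
endpoints = [p["name"] for p in pods if selects(service_selector, p["labels"])]
print(endpoints)  # -> ['web-1', 'web-2']
```

Because membership is computed from labels rather than fixed addresses, pods can come and go (scaling, rolling updates) while the service endpoint stays stable.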
Developers and operations can dictate whether the application runs on a single container or a group of containers without any impact on the application.
These straightforward concepts enable a range of architectures, from legacy stateful applications to microservices to IoT, data-intensive, and serverless apps, to be built on Kubernetes.
However, since Kubernetes is operationally complex to deploy, manage, and maintain, it makes a lot of sense to consider a SaaS-managed control plane as a solution, so that Kubernetes installation, troubleshooting, deployment management, upgrades, and monitoring do not end up causing significant business disruption and increasing personnel costs.
Consideration #3: From Monoliths to Microservices to Serverless
The vast majority of applications being developed now are systems of engagement being directly used by customers. These apps support a high degree of interactivity and rate of change to the application based on the data gathered using millions of micro customer interactions. All of this results in a high degree of velocity from a development standpoint. Monolithic architectural styles are no longer a fit for digital platforms as discussed below.
It is no surprise, then, that Cloud Native apps need a range of architectural styles to accommodate this discrete nature of business functionality and change. Accordingly, most enterprise apps need to consider approaches ranging from microservices to serverless architectures. Microservices apps are broken down into smaller business services that are then deployed, maintained, and managed separately; typically, each service runs in its own process. The promise of this style is greater flexibility for development teams, higher release velocity (since the whole app doesn't need to be changed to accommodate changes in smaller units), and scalability.
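The "each service runs in its own process behind a network endpoint" idea can be sketched with nothing but the Python standard library. The "orders" service and its health payload below are hypothetical, and a real microservice would add routing, persistence, and deployment packaging:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A minimal, self-contained "orders" microservice: it owns one small
# piece of business functionality and exposes it over HTTP.
class OrdersHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"service": "orders", "status": "up"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to an ephemeral port and serve in a background thread,
# standing in for "its own process" in this single-file sketch.
server = HTTPServer(("127.0.0.1", 0), OrdersHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

with urlopen(f"http://127.0.0.1:{port}/health") as resp:
    data = json.loads(resp.read())
print(data)  # -> {'service': 'orders', 'status': 'up'}
server.shutdown()
```

Each such service can then be versioned, scaled, and released independently of its peers, which is where the flexibility and release-velocity gains come from.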
In addition, frameworks that support microservices provide functionality such as load balancing, discovery, high availability, and flexibility in upgrades (blue/green deployments, rollbacks/roll-forwards, etc.). The more cutting-edge cousin of microservices is the serverless architecture, especially in domains such as IoT/edge computing where the architecture needs to support streaming data. Each serverless function can be deployed into a Docker container that is instantiated when invoked and destroyed when idle. Serverless architectures and frameworks can dramatically reduce the time spent building up the infrastructure for container-driven applications. They reduce business time to value by eliminating many of the operational steps involved in packaging, deploying, and managing the infrastructure around development pipelines.
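The invoke-on-demand lifecycle described above can be modeled in a few lines. This is only a conceptual sketch of the dispatch step with hypothetical function names; real serverless platforms run each invocation in a container and add scaling and event triggers:

```python
# Minimal sketch of a function-as-a-service registry and dispatcher:
# functions are registered by name, looked up and run only when
# invoked, and hold no state between calls.

registry = {}

def register(name):
    """Decorator that registers a handler under a function name."""
    def wrap(fn):
        registry[name] = fn
        return fn
    return wrap

def invoke(name, event):
    # The handler is resolved and executed per invocation, mirroring
    # the instantiate-on-invoke, destroy-when-idle lifecycle.
    handler = registry[name]
    return handler(event)

@register("thumbnail")
def make_thumbnail(event):
    # Hypothetical handler: pretend to produce a thumbnail record.
    return {"status": "ok", "image": event["image"], "size": "128x128"}

print(invoke("thumbnail", {"image": "cat.png"}))
# -> {'status': 'ok', 'image': 'cat.png', 'size': '128x128'}
```

Because nothing runs between invocations, the platform, not the application team, owns capacity planning, which is the operational saving the paragraph above describes.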
Published at DZone with permission of Vamsi Chemitiganti, DZone MVB. See the original article here.