Why Developers Should Favor a Hyperscale, Converged Infrastructure
The old approach to managing infrastructure isn't working. Learn about hyperscaling, its origins, and what it means for the future of storage and infrastructure.
Stop me if you've heard this one from your favorite neighborhood tech vendor..."We take the same techniques that hyperscalers like Google, Amazon, and Facebook use, and package it up for enterprises like you."
Despite this hyperscale hyperbole, there is a lot of merit to the concept. Democratizing technologies and techniques employed by some of the web giants is the only economical path forward for enterprises grappling with digital transformation and ever-growing data.
Hedvig's founder, Avinash Lakshman, drives this home when he talks about his Facebook days. He recalls how, after a year of developing Cassandra, he bootstrapped it one weekend with 65 million users and grew the cluster to nearly 800 million users in three years.
The surprising part? The entire cluster was managed by just Avinash and three developers. A small team created, implemented, and operated hundreds of nodes of Cassandra. It's true DevOps in a true web-scale data center.
Now let's compare that to standard enterprise IT operations. Handling that many users and petabytes of data would take significantly more manpower. In fact, according to Gartner: "The 2016 average number of Raw TBs supported per storage FTE is 403." Extrapolating from this metric, we can assume a 7PB Cassandra cluster would require 17 full-time admins, which doesn't even take into account the "dev" side of Avinash's 4-person DevOps team.
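The staffing estimate above is simple arithmetic, and worth checking. A short sketch, using the Gartner figure quoted above and assuming decimal units (1 PB = 1,000 raw TB):

```python
# Back-of-the-envelope check of the staffing estimate above.
# Assumption: 1 PB = 1,000 raw TB, matching the decimal units in Gartner's metric.
TB_PER_FTE = 403      # Gartner 2016: average raw TBs supported per storage FTE
CLUSTER_PB = 7        # size of the hypothetical Cassandra cluster, in PB

raw_tb = CLUSTER_PB * 1_000
ftes = raw_tb / TB_PER_FTE
print(f"{raw_tb} TB / {TB_PER_FTE} TB per FTE = {ftes:.1f} FTEs")  # ~17.4 admins
```

Rounding down gives the 17 full-time admins cited above, versus the four-person team that actually ran the cluster.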
Bottom line: The old way of managing IT infrastructure is broken. A new, hyperscale approach is needed to remain competitive. In fact, Avinash's story inspired us to write an entire eBook on this topic. But this battle will be championed by developers, not IT admins.
Let me explain why.
The Case for Hyperscale First Starts With Hyperconverged
The first step in the modern infrastructure journey was pioneered by hyperconverged infrastructure (HCI), and Nutanix in particular. HCI incorporates software-defined technologies; scale-out, modular components; and unified management interfaces (UIs, APIs, automation, orchestration, etc.). Although HCI collapses compute, storage, and networking into a single platform, it's the storage component that really drove HCI into mainstream IT.
To understand why storage drove the hyperconverged revolution, we first have to understand why storage is so challenging. Consider the following from a May 2017 ESG report, a research study of more than 300 storage decision makers.
The following is how often respondents cited particular data storage challenges:
- 34%: Hardware costs
- 31%: Data protection
- 27%: Data migration
- 25%: Rapid data growth rate
All of these relate to the challenges of traditional, monolithic storage arrays. Yes, they're expensive from a capital perspective. But in totality, they're even more expensive from an operational perspective. That makes sense when you need a specialized storage FTE for every 403 TB! Enter hyperconverged infrastructure, which reduces both capex and opex with a modern, virtualization-first approach.
Hyperconverged Solves Some, but Not All, of Developers' Needs
What do developers need from infrastructure, and storage in particular? In the same ESG report, respondents familiar with SDS cited a number of operational benefits as well as additional developer-oriented benefits (see the figure below). Opex reduction, deployment simplification, storage management simplification, and capex reduction were the four most-cited benefits. Lightweight deployments for app dev, greater agility, and heterogeneity were the next three, all part of developers' infrastructure requirements.
HCI was aimed at solving the problems of IT operators, and specifically server, storage, and network admins. It's not particularly well suited to solving the problems of developers. Developers need:
1. Self-service access to infrastructure provisioning that's "AWS-like."
2. API-driven infrastructure that can be built into application workflows.
3. Programmable infrastructure to accelerate test/dev to production release times.
4. The ability to quickly clone and create sandbox environments for test/dev.
5. Simultaneous support for a range of hypervisor, container, and cloud environments.
6. Elements that can be scaled independently based on application-specific needs.
7. Native integration with orchestration tools like Kubernetes, Docker, and Mesos.
In these environments, HCI can address the first four needs above, but not the last three. Bundling the hypervisor, orchestration, and networking software with the storage software simplifies the environment but limits scalability, flexibility, and extensibility.
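To make the API-driven, self-service requirements above concrete, here is a minimal sketch of how a developer might provision and clone storage from a pipeline. The endpoint and payload fields are hypothetical: real software-defined storage platforms expose similar REST APIs, but the names and parameters vary by vendor.

```python
import json
from typing import Optional

API_BASE = "https://storage.example.com/api/v1"  # hypothetical endpoint

def provision_volume(name: str, size_gb: int,
                     clone_of: Optional[str] = None) -> dict:
    """Build the request a CI/CD workflow would POST to create or clone a volume."""
    payload = {"name": name, "size_gb": size_gb}
    if clone_of:
        # Cloning an existing volume gives test/dev a sandbox copy of prod data.
        payload["clone_of"] = clone_of
    return {"url": f"{API_BASE}/volumes", "body": json.dumps(payload)}

# Developer self-service flow: clone production data into a sandbox volume.
request = provision_volume("sandbox-db", size_gb=500, clone_of="prod-db")
print(request["url"], request["body"])
```

The point is not the specific call, but that provisioning becomes a function a developer can invoke from application workflows rather than a ticket filed with a storage admin.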
Given these limitations, it's no surprise we're seeing a rise in hyperscale infrastructure in the enterprise. But hyperscale infrastructure is just an architectural approach. It's a methodology for decoupling components, deploying them as software-defined infrastructure, and scaling components independently based on growth and application requirements.
Does that sound familiar? It should. At the risk of throwing another IT vendor cliche out there, this describes a modern version of converged infrastructure.
Back to the Future: The Rise of Hyperscale Means Revisiting Converged Infrastructure
Converged infrastructure is a packaging choice for hyperscale technologies. Originally made popular by VCE Vblock, converged infrastructure takes industry-standard servers (usually blade servers), SAN or NAS arrays, and network switching, and bundles them together as pretested, pre-integrated components with VMware. You can scale elements independently or in modular blocks knowing they will all work together. But this approach predates software-defined technology (except VMware, which is arguably software-defined compute).
To understand the difference between converged and hyperconverged, let's borrow again from ESG's paper. In their words:
Converged infrastructure provides the benefits of hyperconverged (operational simplicity, programmability, data protection, and efficiency) combined with the benefits of hyperscale technologies (scalability, flexibility, and extensibility). In fact, ESG finds that 38% of IT decision makers believe better scalability is a reason their organization would deploy converged infrastructure over a hyperconverged alternative.
When Developers Are King, Hyperscale and Converged Infrastructure Will Rule
Most enterprises need to support digital business, remain competitive, and continue to thrive in a world of disruption. As a result, like it or not, they're undergoing cloud transformation. And whether you call them Mode 2 apps, cloud-native apps, or just modern apps, cloud infrastructure requires a departure from traditional IT.
In this world, developers are king. It's about how quickly you can bring new digital services and apps to market. Exposing programmable, scalable infrastructure to developers is a necessity to achieve this velocity.
A shift is underway. Enterprises are focused on keeping developers productive. Although I don't think we'll see a noticeable slowdown in hyperconverged momentum, there will be a big uptick in hyperscale, converged infrastructures that provide developer-friendly environments.
Published at DZone with permission of Rob Whiteley , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.