
The Future is Distributed: Decentralizing for a Better Cloud


Decentralizing the cloud is the only way to keep up with the demands of today's always-on, real-time economy.


When it comes to the public cloud, you can't deploy applications at scale without leveraging Azure, AWS, or GCP in some manner. The public cloud has proven that consistency, availability, and abstraction lead to better business growth, technical outcomes, and customer experiences. We see this in many of the unicorn IPOs launched earlier this year: Lyft, Pinterest, Slack, and others all rely on the public cloud. Despite its hefty price tag, businesses know it's worth the cost for the flexibility and growth it enables, since they can focus on their revenue-generating applications rather than on supporting their infrastructure.

Evolving the Public Cloud: Addressing 24/7 Economic Demands

The major cloud providers have spent years creating, supporting, and perfecting hyper-centralized, hyper-dense data centers. As a result, we have found ourselves with massively centralized data in a limited number of geographic locations close to the largest population centers. This has become a problem because our world has moved to a real-time, digital paradigm in which the physical and digital worlds are colliding. Applications now do far more than let us buy a product, play a song, or watch a video. They have become a digital extension of our physical world, requiring real-time responses for natural interactions and localized data processing due to bandwidth constraints. Herein lies the problem with the hyper-centralization of data.

The network latency and bandwidth required to support today's use cases are simply not achievable with a highly centralized public cloud, even after accounting for 5G. For one, there is simply not enough fiber in the ground, and no one has solved that pesky speed-of-light problem. Public cloud locations are, at best, 50 milliseconds (ms) from their users, and 4G LTE access adds another 30-50 ms on top of that. This leads us to the next wave of infrastructure design: the distributed cloud.
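The latency arithmetic above can be sketched as a quick budget calculation. The cloud and LTE figures come from the text; the 20 ms "real-time" target is an illustrative assumption, not a figure from the article:

```python
# Rough end-to-end latency budget for a centralized public cloud,
# using the figures cited above (assumed round numbers, not measurements).

CLOUD_MS = 50             # best-case latency from user to a public cloud location
LTE_ACCESS_MS = (30, 50)  # additional latency added by 4G LTE access
REAL_TIME_BUDGET_MS = 20  # illustrative target for "real-time" interaction (assumption)

def total_latency_ms(cloud_ms: float, access_ms: float) -> float:
    """Total latency from device to application: access network + backbone."""
    return cloud_ms + access_ms

best = total_latency_ms(CLOUD_MS, LTE_ACCESS_MS[0])
worst = total_latency_ms(CLOUD_MS, LTE_ACCESS_MS[1])

print(f"centralized cloud over LTE: {best}-{worst} ms")         # 80-100 ms
print(f"meets real-time budget? {worst <= REAL_TIME_BUDGET_MS}")  # False
```

Even the best case lands at 80 ms, roughly four times the assumed real-time budget, which is the gap that hyper-localized compute is meant to close.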

What is the Distributed Cloud?

Already underway, the transition from the centralized to the distributed cloud is not being led by the major cloud providers. The distributed cloud is the hyper-localization of compute, network, and storage close to the user, while preserving the experience, level of service, and abstraction that the public cloud provides.

Distributed cloud companies such as Fastly, Cloudflare, and Packet are leading the way for smaller regional service providers by leveraging hyper-scale, cloud-native principles. Thanks to AWS, Azure, and GCP, organizations have learned how to manage infrastructure in an abstract way, but that business model was never intended to extend to the edge. The main reason is the difference in design and mindset between managing 20 sites with 10,000+ devices each and 10,000+ sites with 20 devices each, not to mention the financials required to maintain so many sites.

When building the distributed cloud, you are limited by space, power, cost, and, more importantly, technical manpower. When deploying the network, you can't send technical staff to set up every site at the bottom of a cell tower, in a branch retail store, or in a remote office. Hardware redundancy is limited by cost, and remote moves, adds, and changes need to be hitless, reliable, and precisely targeted. You can't risk upgrading BGP or the RIB when you are only trying to fix an issue with LLDP. And you can't leave vulnerable network code running in hundreds of unmanned sites with no way to fix it without impacting service.

Today's 24/7 world will not tolerate application outages; services must be always-on in our always-available world. None of this can be done with legacy monolithic network software. Those software upgrades are generally complicated, require weeks or months of preparation, and are unreliable and risky when you don't have local resources available to jump in when things go awry. This leaves operators in a tough place, deferring or batching impactful updates in an attempt to manage risk and expectations. As a result, security fixes cannot be deployed in any reasonable amount of time, which widens the attack surface. The modern solutions we have been offered for managing these old designs amount to an all-or-nothing centralized software-defined networking (SDN) controller: either all forwarding and control-plane decisions are made there, or you continue box-by-box management with limited API access or CLI-only control. There is no in-between.

Containerized Networks for Today’s Cloud Requirements

The public cloud's centralized protocols are not helping us tackle today's application demands; they only reinforce the complexity that centralized SDN architectures pushed us toward in the first place. Instead, we need to intelligently segment and containerize network services, recognizing that networks should continue to have distributed control and data planes built on standard protocols.

As the world continues to move from the centralized public cloud to a distributed cloud with data hyper-localized to its users, a new network operating system architecture is required. In this 24/7 world, where real-time applications are king, we need to embrace what we have learned from the public cloud and use it to enable the network innovation and agility the distributed cloud demands. Containerized hitless upgrades, software redundancy, Kubernetes management, and realistic costs are no longer “nice to haves”; they are absolute requirements.
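As one illustration of the "Kubernetes management" and "containerized hitless upgrades" called for above, a containerized network service could be rolled out across edge nodes with a rolling-update policy, so an upgrade never takes every instance down at once. This is a hypothetical sketch, not a reference to any specific product: the names, image, and port are all assumptions.

```yaml
# Hypothetical containerized routing agent, deployed to every edge node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-routing-agent          # assumed name, for illustration only
spec:
  selector:
    matchLabels:
      app: edge-routing-agent
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1             # upgrade one node at a time; the rest keep forwarding
  template:
    metadata:
      labels:
        app: edge-routing-agent
    spec:
      containers:
      - name: routing-agent
        image: example.com/routing-agent:2.1.0   # assumed image and tag
        readinessProbe:             # don't treat a node as upgraded until it is healthy
          httpGet:
            path: /healthz
            port: 9090
```

The `maxUnavailable: 1` setting and the readiness probe are what make the rollout approach hitless in practice: a bad image stalls the rollout at one node instead of blacking out hundreds of unmanned sites.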


Published at DZone with permission of

Opinions expressed by DZone contributors are their own.
