Understanding the Future of the Data Center Edge
This article traces how the data center edge has evolved, from hardware load balancers to the cloud-native edge stack.
With the adoption of Kubernetes and microservices, the edge has evolved from simple hardware load balancers to a full stack of hardware and software proxies that comprises API gateways, content delivery networks, and load balancers. Understanding this transformation is key for data center executives so they can make the right architecture, strategy, and operations decisions. A quick historical journey helps to build that understanding.
The Early Internet and Load Balancers
In the mid-1990s, web application architecture was in its infancy. The classic n-tier architecture, consisting of a database tier, application tier, and presentation tier, was the de facto application architecture of this time. The n-tier architecture was horizontally scalable by connecting multiple instances of an application or presentation tier to the Internet using the first iteration of the data center edge: a load balancer. In this era, the load balancer was responsible for routing traffic between different instances of the application, ensuring high availability and scalability. The load balancer was typically a hardware appliance, although the release of HAProxy in 2001 started to popularize the concept of software load balancers.
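The core job of that first-generation load balancer, distributing incoming requests evenly across identical application instances, can be sketched in a few lines of Python. This is a minimal illustration of the round-robin strategy, not any particular product; the backend names are made up:

```python
import itertools

class RoundRobinBalancer:
    """Distribute incoming requests across application-tier instances in turn."""

    def __init__(self, backends):
        # itertools.cycle yields the backends endlessly in order.
        self._backends = itertools.cycle(backends)

    def next_backend(self):
        """Return the backend that should receive the next request."""
        return next(self._backends)

balancer = RoundRobinBalancer(["app-1:8080", "app-2:8080", "app-3:8080"])
print([balancer.next_backend() for _ in range(4)])
# ['app-1:8080', 'app-2:8080', 'app-3:8080', 'app-1:8080']
```

Because each instance serves the same n-tier application, any backend can handle any request; the balancer only has to keep traffic spread evenly and route around failed instances (health checking is omitted here for brevity).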
Rise of the ADCs and Web 2.0
The Web 2.0 era brought two important shifts in networking: communications were increasingly encrypted, and longer-lived connections carried many requests each. These shifts drove the evolution of the edge from the standard hardware/software load balancer to more specialized application delivery controllers (ADCs). ADCs included a variety of functionality for so-called application acceleration, including SSL offload, caching, and compression.
Getting to Web-Scale
In the early 2010s, a number of cloud-first companies experienced exponential growth in their user base. The software behind these companies was originally architected as monolithic web applications. As their user bases swelled to astronomical numbers, these companies found that web-scale problems were indeed a different type of problem that dictated a different architecture. Companies such as Twitter, Facebook, and New Relic started to refactor key pieces of functionality out of their monolith into independently deployed services.
By deploying critical business functionality as services, these organizations were able to independently scale and manage different aspects of their overall application. Traffic to these independent services was still routed through the monolith, however, so any change to routing meant redeploying the entire monolith. This arrangement was a bottleneck for the speed of change, but it at least solved the scale problem.
API Gateway to the Rescue
Cutting-edge organizations that had solved the scale problem were now faced with the monolith problem that was slowing their application development. One lesson from these architectures was fairly obvious: for the refactored services, the monolith was simply functioning as a router. This observation sparked the development of early API gateways.
An API gateway performed the routing functionality that was in the original monolith, creating a common facade for the entire application. Cross-cutting application-level functionality such as rate limiting, authentication, and routing was centralized in the API gateway. This reduced the amount of duplicative functionality required in each of the individual services.
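The facade role described above can be sketched as a toy gateway that centralizes two cross-cutting concerns, prefix-based routing and per-client rate limiting, so the services behind it don't have to duplicate them. All route prefixes, service names, and limits below are illustrative:

```python
import time
from collections import defaultdict

class ApiGateway:
    """Toy gateway: prefix routing plus a shared per-client rate limit."""

    def __init__(self, routes, max_requests, window_seconds):
        self.routes = routes                # path prefix -> backing service
        self.max_requests = max_requests    # allowed requests per window
        self.window = window_seconds
        self.hits = defaultdict(list)       # client -> recent request times

    def handle(self, client, path):
        # Cross-cutting concern 1: rate limiting, applied before any routing.
        now = time.monotonic()
        recent = [t for t in self.hits[client] if now - t < self.window]
        if len(recent) >= self.max_requests:
            return "429 Too Many Requests"
        recent.append(now)
        self.hits[client] = recent
        # Cross-cutting concern 2: routing to the service behind the facade.
        for prefix, service in self.routes.items():
            if path.startswith(prefix):
                return f"routed to {service}"
        return "404 Not Found"

gw = ApiGateway({"/orders": "order-service", "/users": "user-service"},
                max_requests=2, window_seconds=60)
print(gw.handle("alice", "/orders/42"))  # routed to order-service
print(gw.handle("alice", "/users/7"))    # routed to user-service
print(gw.handle("alice", "/orders/43"))  # 429 Too Many Requests
```

A production gateway would add authentication, retries, and TLS termination in the same place, which is exactly the point: each concern is implemented once at the facade instead of once per service.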
The Cloud-Native Era: Microservices
The monolith had become a minilith, but it still existed and slowed down application development and deployment. Microservices stepped in to solve this issue. Each microservice represents a self-contained business function and is developed and released independently of the other microservices of an application.
By decoupling development cycles, microservices enable organizations to scale their software development processes more efficiently for the cloud. Given that microservices can be deployed in multiple environments (virtual machines, bare metal, containers, or as functions), API gateways play a critical role in routing traffic to the right microservice.
Now to the Future — Moving to Full Cycle Development and the Cloud-Native Workflow
Microservices aren’t just a shift in application architecture; they are also a shift in the development workflow. Teams are responsible for the full software development lifecycle, from design to development to testing to deployment and release. Some organizations also put these teams on the on-call rotation (“you build it, you run it”). This development model, popularized as full-cycle development by Netflix, is a transformational shift in how software is developed and shipped.
This shift in workflow also has implications for the data center edge. Not only do API gateways (and other elements of the edge stack) need to adapt to a microservices architecture, but the entire edge also needs to be accessible and manageable by full-cycle development teams. This management includes routing (which version of a service should receive production traffic) as well as finer-grained controls such as weighted routing (needed for canary releases) and traffic shadowing (copying traffic to a test version of a service for testing purposes). By giving development teams the ability to manage release and deployment, organizations are able to scale these processes to support even highly complex applications.
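The two finer-grained controls mentioned above can be sketched together: weighted routing picks a service version probabilistically (e.g., 5% to a canary), while shadowing mirrors a copy of the request to a test version whose response is discarded. This is a minimal illustration, and the version names are made up:

```python
import random

def pick_version(weights, rng=random):
    """Weighted routing: choose a version in proportion to its weight."""
    versions = list(weights)
    return rng.choices(versions, weights=[weights[v] for v in versions])[0]

def route(request, weights, shadow_target=None):
    """Route to a weighted primary version; optionally mirror to a shadow."""
    primary = pick_version(weights)
    responses = {"primary": f"{primary} handled {request}"}
    if shadow_target:
        # The shadow copy is fire-and-forget: the caller never sees
        # its response, so a buggy test version cannot hurt real users.
        responses["shadow"] = f"{shadow_target} saw copy of {request}"
    return responses

# Canary release: roughly 95% of traffic to v1, 5% to the new version.
weights = {"v1": 95, "v2-canary": 5}
print(route("GET /orders", weights, shadow_target="v2-test"))
```

In a real edge stack these knobs are declarative configuration rather than application code, which is what makes them safe to hand to full-cycle development teams.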
Full-cycle development teams also frequently have operational responsibility for their microservices. Critical to this success is real-time visibility into the performance of their microservice. The edge provides important insight into the behavior of a microservice by virtue of analyzing all traffic that flows to and from a microservice. This enables the edge to report on metrics such as latency, throughput, and error rates, providing insight into application health.
Edge Policy Management is Critical
Given the importance of the edge in modern cloud-native workflows, how do full-cycle development teams manage the edge? Unfortunately, all components of the edge stack have traditionally been managed by operations, and operational interfaces are a poor fit for application developers on full-cycle development teams. In addition, edge components are often operated in isolation, without a cohesive operational interface. After all, the full cycle developers are not full-time operators; they just need to be able to operate the edge machinery for their specific needs.
Fortunately, the Kubernetes ecosystem can provide guidance and inspiration. Using common YAML-based configuration language, developers can manage their own edge configurations while centralized operations teams enforce global policies. This is the latest evolution of the edge that leading-edge teams are employing to deliver rapid development cycles.
Want to Learn More?
This blog post summarizes some excellent thinking on the future of the edge by Richard Li, CEO and Daniel Bryant, Product Architect on the Ambassador team. The topic was presented at QCON London and is worth watching.
Published at DZone with permission of Bryan Semple . See the original article here.
Opinions expressed by DZone contributors are their own.