Edge Computing: Public Cloud on 5G — the Grand Convergence


5G cellular networks pair with emerging edge computing services like AWS Wavelength and Azure Network Edge Compute (NEC) to deliver lower latency by processing data closer to where it originates.



The future of cellular tech looks bright with 5G and edge computing

The closing months of 2019 saw a slew of service announcements from AWS and Microsoft at their flagship events, AWS re:Invent and Microsoft Ignite. Notable among them were services leveraging 5G networks to run workloads at the 5G edge, providing sub-millisecond latency, higher bandwidth, and improved reliability.

AWS has partnered with telecom service providers Verizon, Vodafone, SK Telecom, and KDDI to provide AWS Wavelength, and is in the process of adding more partners. As announced, AWS Wavelength will enable developers to build applications that serve end-users with single-digit millisecond latency over the 5G network.

Microsoft Azure's announcement was quieter, but the service capability appears similar. Microsoft teamed up with AT&T and announced the deployment of its Network Edge Compute (NEC) platform in AT&T's Dallas facility.

With 5G services set to be mainstream in this decade, we are beginning to see disruptive business models centered around a collaboration between the two principal parties in the ecosystem: the communication service provider (CSP) and the cloud vendor (like AWS or Microsoft Azure).

Both of these are edge computing solutions where the processing will be offloaded to the AWS and Azure infrastructures set up in the CSP’s premises.


Edge Computing

The term "edge" in edge computing denotes a deployment topology. The edge assumes many forms, from a humble smartphone to sophisticated sensors attached to buildings, cars, and equipment: effectively everything, limited only by human imagination. These devices capture data and feed it to downstream processing systems located in core infrastructure, either in an on-premises data center or in the public cloud. The central idea is multiple edge devices connecting to a central processing system or server, which is essentially a centralized processing paradigm.

Centralized Processing at Core Infrastructure

Computing at the edge reverses this by offloading some or all of the processing near to the edge devices, thereby representing a decentralized or distributed processing approach. 

Decentralized Processing at Edge

There can be multiple layers in the edge, depending on how traffic originating at the first touchpoint travels across multiple hops to reach the core infrastructure. Irrespective of which layer does the processing, it all comes under the ambit of edge computing as long as the compute happens outside the core system.

5G Network — Powering Ultra Low Latency and Higher Throughput Workloads

Wireless networking standards have evolved over the years, starting with the 1G analog telecommunication standard introduced in 1979 and the early 1980s. Every subsequent standard has been a quantum leap over the previous one in terms of higher speed, improved reliability, and new features. 5G is the fifth-generation cellular network standard. A 5G network is characterized by three key features, as shown in the figure below:

Three 5G network features

The figure below represents some of the technology components enabling 5G.

Components enabling 5G

5G technology on the high band uses millimeter waves to transmit messages at a much higher frequency (30 GHz and above) using smaller cell towers placed in much closer proximity than the mobile towers of today.
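As a quick sanity check on the "millimeter wave" name, the wavelength at these frequencies follows from the standard relation λ = c / f:

```python
# Wavelength of a radio signal: lambda = c / f
C = 299_792_458  # speed of light, m/s

def wavelength_mm(freq_ghz: float) -> float:
    """Return the wavelength in millimeters for a frequency in GHz."""
    return C / (freq_ghz * 1e9) * 1000

# 5G high-band starts around 30 GHz
print(round(wavelength_mm(30), 1))  # 10.0 -> roughly 10 mm, hence "millimeter wave"
```

At 30 GHz the wavelength is about 10 mm, and it shrinks further at higher frequencies, which is why these bands are collectively called millimeter wave.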

Cloud Services at 5G Network Edge

Although 5G provides speed, traffic originating at the edge endpoint still needs to traverse multiple hops through the cell tower, mobile network, and multiple aggregators before getting out into the internet and reaching the cloud vendor's network, adding latency in double-digit milliseconds.

Multiple data hops


While this is acceptable for traditional workloads, it falls short of providing the desired experience in latency-critical workloads for domains like industrial IoT, autonomous cars, online games using video streaming, AR/VR, and more. The success of these use cases hinges on processing a high volume of data packets in sub-millisecond durations, also called ultra-low latency.
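To make the hop arithmetic concrete, here is a toy one-way latency budget; the per-hop numbers below are illustrative assumptions, not measurements:

```python
# Illustrative one-way latency budgets (all values are made-up assumptions)
internet_path_ms = {
    "radio (5G air interface)": 1.0,
    "cell tower -> mobile core": 2.0,
    "aggregators / peering": 5.0,
    "internet -> cloud region": 10.0,
}
edge_path_ms = {
    "radio (5G air interface)": 1.0,
    "cell tower -> edge compute in CSP premises": 1.0,
}

print(f"via cloud region: {sum(internet_path_ms.values()):.1f} ms")  # 18.0 ms
print(f"via 5G edge:      {sum(edge_path_ms.values()):.1f} ms")      # 2.0 ms
```

Even with these rough numbers, removing the aggregator and internet hops is what moves the budget from double-digit milliseconds toward the ultra-low-latency range.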

Processing on cloud vendor's infrastructure within CSP network

In order to cut down the additional hops outside the mobile network, the cloud vendor's infrastructure is placed in the CSP's premises. The edge traffic can now be processed in the cloud vendor's infrastructure located within the CSP premises without leaving the mobile network and in the process completely eliminating the extra latency required to reach the cloud vendor's network.  

Apart from reducing latency, some of the other benefits of processing data at the edge are: 

  1. Avoiding network jitter: Traffic traversing the public internet is subject to network jitter, which can completely ruin the user experience in real-time streaming systems.
  2. Ultra-low latency for mission-critical use cases: Mission-critical systems like autonomous cars or industrial IoT rely on services with ultra-low latency and strict QoS, which can only be achieved if the traffic remains entirely within the CSP's network.
  3. Regulatory constraints: In heavily regulated industries like banking and health insurance, certain customer data cannot be sent to the core network and needs to be processed at the edge.
  4. ML at the edge: Machine learning inference use cases otherwise need to send a lot of data to the core for processing. Running inference at the edge using the cloud vendor's ML services saves substantial backhaul bandwidth.
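The backhaul savings in the ML case are easy to quantify. The rates below are made-up assumptions purely for the arithmetic: suppose each camera produces a 4 Mbps video feed, but edge inference only needs to ship small detection events upstream:

```python
# Illustrative backhaul savings from running ML inference at the edge.
# All rates are assumptions for the sake of the arithmetic.
cameras = 100
raw_video_mbps_per_camera = 4.0  # shipping raw video to the core
event_kbps_per_camera = 2.0      # shipping only inference results

raw_backhaul_mbps = cameras * raw_video_mbps_per_camera
edge_backhaul_mbps = cameras * event_kbps_per_camera / 1000

print(f"raw video to core:   {raw_backhaul_mbps:.1f} Mbps")   # 400.0 Mbps
print(f"events only to core: {edge_backhaul_mbps:.1f} Mbps")  # 0.2 Mbps
```

Under these assumptions, edge inference cuts the backhaul requirement by three orders of magnitude.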

Developer Perspective

Both AWS and Azure promise a consistent developer and manageability experience at the 5G edge by providing the same compute and storage services available in the core platform. According to the AWS announcement, EC2, ECS, and EKS will be available, with more to be added in the future. On Azure, the same set of tools, including Azure DevOps and Azure Kubernetes Service (AKS), can be used to build applications on the edge, and developers deploy workloads just as they would to any other Azure region.

AWS Wavelength

AWS provides the Wavelength Zone, a physical site within the CSP's own network where AWS infrastructure is deployed.

The Wavelength Zone is modeled similarly to an Availability Zone and is tethered to a region. You create a VPC in the region and select the Wavelength Zone when creating a subnet, in the same way you would select an Availability Zone. You then associate compute resources with that subnet to process latency-sensitive workloads.
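Based on the announced model, placing a subnet in a Wavelength Zone should look like placing one in any other zone: you pass the Wavelength Zone's name where the Availability Zone name normally goes. A minimal sketch of the boto3 parameters is below; this is an assumed workflow, and the VPC id and zone name are placeholders, not real identifiers:

```python
# Sketch: placing a subnet in a Wavelength Zone via boto3, assuming Wavelength
# Zones are selected like Availability Zones, as the announcement describes.

def wavelength_subnet_params(vpc_id: str, cidr: str, wavelength_zone: str) -> dict:
    """Build the parameters for ec2.create_subnet targeting a Wavelength Zone."""
    return {
        "VpcId": vpc_id,
        "CidrBlock": cidr,
        "AvailabilityZone": wavelength_zone,  # the Wavelength Zone name goes here
    }

params = wavelength_subnet_params(
    vpc_id="vpc-0123456789abcdef0",             # placeholder VPC id
    cidr="10.0.8.0/24",
    wavelength_zone="us-east-1-wl1-bos-wlz-1",  # placeholder zone name
)
# With credentials configured, you would then call:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   subnet = ec2.create_subnet(**params)
print(params["AvailabilityZone"])
```

The key point is that no new API surface is needed; the existing VPC and subnet workflow carries over to the 5G edge.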

Azure Network Edge Compute (NEC)

Microsoft positioned Network Edge Compute (NEC) as its edge computing platform deployed in the CSP's network. Azure compute infrastructure is placed a single hop away from the 5G core network in the carrier's premises and uses the carrier's network. It is built as a smaller edge compute platform based on the Azure Stack Edge platform.

A related variant, Multi-access Edge Compute (MEC), was also announced; it can be deployed at the network edge of a customer's on-premises network.


Edge computing coupled with multiple variants of edge adds an altogether new dimension: the decision on workload placement. AWS provides three infrastructure options:

  1. AWS Wavelength Zone - for running applications with sub-millisecond latency on the 5G Edge
  2. AWS Local Zone - for running applications deployed closer to end-users with single-digit millisecond latency
  3. AWS Outposts - for running applications that need to remain on-premises due to latency and regulatory requirements.

Corresponding options in Azure are:

  1. Azure Network Edge Compute (NEC) - for running applications with sub-millisecond latency on the 5G edge

  2. Azure Stack - for running applications that need to remain on-premises due to latency and regulatory requirements.

The decision on workload placement needs to be made based on latency requirements, payload volume, security, and other factors depending on the use case. Here, too, we can apply the five pillars of the AWS Well-Architected Framework.
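As an illustration, a placement decision along these lines could be codified as a simple rule. The latency thresholds below are assumptions for illustration, not official guidance:

```python
# Toy workload-placement helper based on the AWS options described above.
# Thresholds are illustrative assumptions, not official AWS guidance.

def place_workload(latency_budget_ms: float, must_stay_on_premises: bool) -> str:
    if must_stay_on_premises:
        return "AWS Outposts"          # data residency / regulatory constraints
    if latency_budget_ms < 1:
        return "AWS Wavelength Zone"   # 5G edge, sub-millisecond
    if latency_budget_ms < 10:
        return "AWS Local Zone"        # single-digit milliseconds, near end-users
    return "Conventional Region / AZ"  # standard cloud deployment

print(place_workload(0.5, False))  # AWS Wavelength Zone
print(place_workload(5, False))    # AWS Local Zone
print(place_workload(50, True))    # AWS Outposts
```

In practice the decision also weighs payload volume, cost, and security, so such a rule would only be a starting point.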

Taking AWS as an example, the architectural recommendations are:

  • Reliability: We need to build resilience into the system and design for graceful fallback to another Wavelength Zone or a conventional Availability Zone when a service fails health checks.
  • Performance: The latency requirements of the application will drive the decision to deploy the workload in a conventional Availability Zone, AWS Outposts, an AWS Local Zone, or an AWS Wavelength Zone, as described above. An application built using a microservices architecture is best placed to take advantage of this flexibility, since its distributed nature lets each service be deployed on the infrastructure matching its latency requirements.
  • Security: We should store only the customer-sensitive data that must remain at the edge and define data lifecycle policies around it.
  • Cost: The price for creating resources and running workloads at the edge will differ from the parent region. Cost and usage reports need to be monitored to derive effective inputs for cost control, applying optimizations such as moving a workload to a different zone if required.
  • Operational Excellence: We need to treat workload location as a filtering or aggregation attribute for monitoring, reporting, automation, and archival of the compute and storage used.


Edge computing has been gaining a lot of traction over the last few years. Both AWS and Azure already had quite a few edge services in the IoT domain, viz. Greengrass, FreeRTOS, and Azure Stack Edge. What is significant about AWS Wavelength and Azure Network Edge Compute (NEC) is the flexibility of selecting the location of the infrastructure, at the edge or on the public cloud, for deploying workloads depending on runtime requirements like latency, payload volume, and data residency, as described in the previous sections. This is coupled with the seamless capability to deploy conventional solutions like containers and virtual machines at the edge.

The advent of 5G, accompanied by supporting technologies, has enabled CSPs to provide digital services and drive innovation beyond traditional networking technologies. They are, however, dependent on the scale-out capabilities and the vast array of managed services provided by cloud vendors. Cloud vendors, on the other hand, would struggle to build and operate telecom networks.

Both parties need to build on each other's core capabilities to offer products of value. In the absence of these services, customers would need to enter direct arrangements with different telecom providers to deploy workloads at their edges, which is expensive, tedious, and hampers agility. This space is expected to heat up and get more interesting in the coming decade.


Glossary

3GPP: Third Generation Partnership Project, a standards body that defines standards and protocols for telecommunication networks.

Backhaul: The process of connecting the air interfaces to wireline networks leading to the data center servers that process requests originating at the edge.

Multi-access edge computing (MEC): The foundational concept of running applications at the telecom provider's network edge. MEC has evolved over the years from a strictly hardware-based appliance running proprietary software to its current state, driven by virtualization.

NFV: Network Functions Virtualization, the virtualization of network hardware such as routers, switches, and firewalls running embedded software.

CUPS: Control and User Plane Separation, a concept defined by the standards body 3GPP. Virtualization at the MEC enabled the separation of the network function into:

  1. A control plane function containing the hardware (think network switches, physical routers, etc.)
  2. A user plane function, a software-based platform where applications run

RAN: Radio Access Network, a component of telecommunications network architecture comprising the base station and antenna, which forward traffic to the core network.

Network Slicing: Enables multiple logical networks to run over a single physical network.

NR: New Radio, the radio access technology (RAT) developed by 3GPP for the fifth-generation (5G) mobile network. Just as 4G is referred to as LTE, 5G is referred to as 5G NR.

Disclaimer: These services are not available for use as of this writing. The information here is my interpretation of AWS Wavelength and Azure Network Edge Compute based on the official AWS/Azure documentation and product announcement.  

Further Reading

5G Infrastructure: Future of Most Disruptive Force in Wireless Technology

How Will 5G Impact Mobile App Development?


Opinions expressed by DZone contributors are their own.
