Trend Alert: How Edge Computing Is Propelling IoT
One of the keys to IoT's success, and perhaps that of the Internet, is edge computing. This overview shows how it works and why it's so important to IoT apps and networks.
We won’t rehash the amazing, too-incredible-to-believe statistics about the IoT’s growth. Nor will we crow about the rosiest projections for the IoT’s potential. Plenty has been written on all this, some destined to be out of date by the time it’s published.
Instead, let’s focus on edge computing, a crucial trend that’s driving the Internet of Things forward and enabling large-scale IoT application deployments on ever more dizzying scales.
What Is Edge Computing?
Edge computing is sometimes also called “fog computing,” a term that better captures its logical extreme: a world in which every Internet-connected device acts as a miniature data center, processing vast amounts of information on the spot without help from the cloud.
We’re not quite there yet. Today, edge computing is about distributed data processing. Rather than relying on massive, distant data centers to process vast amounts of information, edge computing leverages smaller, more proximate data hubs that can respond faster and work smarter.
Edge computing evangelists like RedBird Capital Partners’ Gerry Cardinale see the writing on the wall. With partners like the Ontario Teachers’ Pension Plan, Cardinale’s company makes targeted investments in forward-thinking data center operators positioned to capitalize on the edge computing revolution.
RedBird’s most recent investment, Compass Datacenters, builds and outfits turnkey data centers for midsize and enterprise-grade companies seeking a literal and figurative edge over the competition. Those hubs make possible a slew of large-scale IoT applications that thus far have eluded cost-effective deployment.
But they’re still just scratching the surface.
6 Arguments for Edge Computing
Why is edge computing increasingly seen as the Internet’s next iteration — as the successor to the cloud, for lack of a better frame?
These are among the most commonly cited arguments:
1. Lower Latency
Modern computing is a game of milliseconds. (And, in many cases, even smaller slices of time.) Latency, the delay between a request and its response, is the enemy of efficiency. It’s impossible to eliminate latency completely, but one way to significantly reduce it is to move data processing closer to the edge — and away from centralized data hubs that can all too often become data bottlenecks.
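To put the latency argument in concrete terms, here’s a minimal sketch of the physics floor on round-trip time. The distances and the fiber propagation speed are illustrative assumptions, not measurements: signals in optical fiber travel at roughly 200,000 km/s, so distance alone sets a hard lower bound on latency before any processing happens.

```python
# Illustrative sketch: the propagation-delay floor on network round-trip time.
# The distances and fiber speed below are assumed example values.

FIBER_SPEED_KM_PER_S = 200_000  # light in fiber travels ~2/3 its vacuum speed

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time imposed by propagation delay alone."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

cloud_rtt = min_round_trip_ms(2000)  # a distant centralized data center
edge_rtt = min_round_trip_ms(20)     # a nearby edge hub

print(f"Cloud (2000 km): {cloud_rtt:.1f} ms minimum round trip")
print(f"Edge (20 km):    {edge_rtt:.2f} ms minimum round trip")
```

Real round trips are slower still (queuing, routing, processing), which only widens the gap in the edge’s favor.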
2. Better Bandwidth Utilization
Transmitting vast amounts of data from the point of origination to massive data centers requires lots of bandwidth. And, despite rapid and ongoing increases, bandwidth remains finite. In a world where data generation is likely to outpace bandwidth expansion for the foreseeable future, bandwidth efficiency is critical. By reducing both transmission distance and transmission load, edge computing makes that possible.
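One way to picture the bandwidth savings: an edge hub can collapse raw sensor readings into periodic aggregates and forward only those upstream. The window size, sampling rate, and summary fields below are illustrative assumptions, not a real protocol.

```python
# Illustrative sketch: summarizing raw sensor readings at the edge
# before upload. Window size and reading counts are example values.

def summarize_window(readings):
    """Collapse a window of raw readings into one compact aggregate."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

raw = [20.0 + (i % 7) * 0.1 for i in range(600)]  # e.g. one reading/second for 10 minutes
window = 60                                       # aggregate once per minute

summaries = [summarize_window(raw[i:i + window]) for i in range(0, len(raw), window)]

# 600 raw values shrink to 10 summaries sent upstream.
print(len(raw), "raw readings ->", len(summaries), "summaries")
```

The trade-off is resolution: the cloud sees trends rather than every sample, which is exactly the bargain many IoT deployments want to make.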
3. Faster Data Analysis
Edge computing facilitates faster data analysis at the point of origination. As IoT devices add more processing power and gobble up data from their environments, shipping that data to and from centralized hubs quickly becomes prohibitively expensive. (It already is, in many cases.)
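As a sketch of analysis at the point of origination, an edge node might flag anomalous readings locally and forward only those, instead of streaming everything to a central hub for inspection. The threshold and sample data here are assumptions for illustration.

```python
# Illustrative sketch: simple threshold-based anomaly filtering at the edge.
# The threshold and sample pressure readings are assumed example values.

def flag_anomalies(readings, threshold):
    """Return only the (index, value) pairs that exceed the threshold."""
    return [(i, v) for i, v in enumerate(readings) if v > threshold]

pressures = [101.2, 101.3, 101.1, 118.9, 101.2, 101.4, 120.5, 101.3]
alerts = flag_anomalies(pressures, threshold=110.0)

# Only the two outliers travel upstream; routine readings stay on site.
print(alerts)  # [(3, 118.9), (6, 120.5)]
```

Real deployments would use richer models than a fixed threshold, but the shape is the same: decide locally, transmit selectively.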
4. Greater Diversity of IoT Applications
It’s a chicken-and-egg problem: the rapidly expanding Internet of Things necessitates edge computing, even as edge computing makes possible a whole new range of IoT applications.
One of the most striking examples: the energy extraction industry. Most people never visit oilfields; laypeople who do are shocked to discover just how many land- and air-based sensors swarm around these critical pieces of infrastructure. Increasingly, the data these sensors collect will be analyzed on site. In the not-too-distant future, something similar is likely to happen in the agriculture industry as the world’s breadbaskets sprout digital “skins” composed of billions of ground-based sensors extracting untold amounts of data from the soil and air.
5. Smarter Networks
With data collection and processing siloed at the edge, the vast amount of computing power available at centralized data hubs is free for a higher use: large-scale, high-level analysis that illuminates previously hidden or obscure problems and trends. That’s the first step toward a truly intelligent, global network — eventually populated by tens or hundreds of billions of Internet-connected devices — that gets smarter every day.
6. Better Performance in High-Stakes Situations
Distributed computing power really shines in high-stakes situations where life and limb are on the line: self-driving cars (which must make millions of split-second calculations to keep passengers and pedestrians safe) and disaster situations where every minute counts, to name but two use cases. Turns out living on the edge is safer than it sounds.
Are you ready for the edge computing revolution?
Opinions expressed by DZone contributors are their own.