Not every major technology trend will land. Some are well-intentioned but difficult to implement. Others appear to be solutions looking for problems. But when you find two technologies that pair well together, the combination can accelerate both, bringing co-conspirators together in pursuit of something better.
And that is what appears to be happening with IoT and multi-access edge computing (MEC).
First, Some Myths
People talk about IoT and MEC in ways that sometimes don’t make a lot of sense. The one that gets me the most is the connected car. The story usually goes something like this: cars won’t be able to send all the road data to the cloud if they need to make steering decisions in real time, so there will have to be an edge device.
First, the machine learning (and eventually artificial intelligence) required to make this work has two parts: training the model and using the model. To train the model, you have to get lots of data to some pool of resources (the “cloud”—in quotes here because that could mean a lot of things). But to drive, you are going to have an application that lives locally, using the model to dictate behavior. If the car is not connected, it will still work, because there is no real-time requirement to stream that data to some distant cloud resource. So when people talk about self-driving cars and MEC, they are conflating training, which needs pooled resources, with inference, which runs locally.
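The split above can be sketched in a few lines. This is a deliberately toy example (the braking-distance numbers and the linear model are made up for illustration): the "cloud" side fits parameters from pooled data, ships only those parameters to the vehicle, and the vehicle runs inference with no connectivity required.

```python
import json

# --- "Cloud" side: train on pooled data (hypothetical toy dataset) ---
# Fit y = w*x + b by ordinary least squares on fake braking-distance data.
speeds = [10, 20, 30, 40]            # km/h
distances = [5.0, 11.0, 16.0, 22.0]  # metres

n = len(speeds)
mean_x = sum(speeds) / n
mean_y = sum(distances) / n
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(speeds, distances)) / \
    sum((x - mean_x) ** 2 for x in speeds)
b = mean_y - w * mean_x

# Ship only the trained parameters to the vehicle, not the data.
model_blob = json.dumps({"w": w, "b": b})

# --- Vehicle side: inference lives locally ---
model = json.loads(model_blob)

def predict_stopping_distance(speed_kmh):
    """Apply the downloaded model; works even when the car is offline."""
    return model["w"] * speed_kmh + model["b"]

print(round(predict_stopping_distance(25), 1))
```

The point of the sketch is the shape of the data flow: training data moves to the pool of resources once, while the thing that runs in real time is a small local function that never needs the network.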
You see, the ME in MEC means that there is a connectivity component. If you run everything locally, then you are just left with C(omputing), which is basically an application. If we start referring to all applications as MEC and IoT, we are basically buzz-washing everything as we start the Great Marketing War of 2017/18.
IoT and MEC
While some of the hype is certainly overblown, there are all kinds of use cases where IoT and MEC make fantastic bedfellows.
You can imagine a smart city using Wi-Fi or LTE connectivity for retail centers. Access points are a great way to determine location. So if you had a connected section of a downtown area, users who access the Wi-Fi would basically be transmitting their location. You might want to use their real-time location to serve up contextual content, ranging from deals (the nearby bar is offering a drink special!) to traffic coordination (using an application rather than law enforcement to clear an area before or after a large event). If you are trying to intercept someone at precisely the time they need to make a decision, then having applications run locally based on data served up nearby makes perfect sense.
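The mechanism in that smart-city example is simple enough to sketch: the access point a user associates with stands in for their location, and an edge application maps it to content served nearby. The AP names, zones, and messages below are all hypothetical.

```python
# Hypothetical mapping from Wi-Fi access point to a downtown zone, plus
# per-zone content an edge application might serve in real time.
AP_ZONE = {"ap-main-01": "plaza", "ap-main-02": "stadium"}
ZONE_CONTENT = {
    "plaza": "Happy hour at the corner bar until 7pm",
    "stadium": "Use Gate C to avoid post-game congestion",
}

def contextual_content(ap_id):
    """Resolve the client's associated AP to locally served content."""
    zone = AP_ZONE.get(ap_id)
    return ZONE_CONTENT.get(zone, "Welcome downtown")

print(contextual_content("ap-main-02"))
```

Because the lookup and the content both live at the edge, the response arrives at the moment the user is standing in the relevant spot, which is the whole value proposition.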
Or perhaps there is a set of IoT sensors distributed across a geographically remote area. Consider a mining operation or a manufacturing plant. There might be millions of sensors spread across the site, connected over Wi-Fi to local IoT gateways that are themselves connected to an access network. If these sensors provide information critical to either real-time visibility (as with safety systems) or even real-time automatic controls, it makes perfect sense to host those applications locally. And if there is no local datacenter, then MEC is a good compromise, especially if behavior needs to continue when a WAN failure renders the public cloud inaccessible.
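A minimal sketch of that gateway pattern, with the threshold and sensor names invented for illustration: the safety check runs locally on every reading, while cloud uploads are buffered and simply queue up if the WAN link is down.

```python
from collections import deque

SAFETY_THRESHOLD_PPM = 50.0  # hypothetical gas-concentration limit

class EdgeGateway:
    """Sketch of an IoT gateway that keeps the safety logic local.

    Each reading is checked against the threshold immediately (the
    real-time path) and buffered for the cloud; during a WAN outage the
    buffer grows, but the local logic keeps working.
    """

    def __init__(self):
        self.upload_queue = deque()
        self.wan_up = True

    def ingest(self, sensor_id, value_ppm):
        alarm = value_ppm > SAFETY_THRESHOLD_PPM  # local, no round trip
        self.upload_queue.append((sensor_id, value_ppm))
        if self.wan_up:
            self.flush()
        return alarm

    def flush(self):
        # Stand-in for a real upload; here we just drain the buffer.
        while self.upload_queue:
            self.upload_queue.popleft()

gw = EdgeGateway()
gw.wan_up = False                   # simulate a WAN outage
alarm = gw.ingest("gas-07", 61.2)   # safety check still fires locally
print(alarm, len(gw.upload_queue))
```

The design choice worth noticing is that the alarm decision never depends on the `wan_up` flag; only the telemetry upload does. That is the separation the paragraph above is arguing for.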
Which Comes First?
This really depends on whether you are driven by an application that could be improved by more information, or information that could use better application performance.
Most companies might imagine they are swimming in useful data, but the majority of their data sources are still fairly basic back-office applications that track things like customer behavior and inventory. And so the problems that tend to come up are things like how to connect branches back to these big systems, which is why SD-WAN is so hot.
If this is the case, then using SD-WAN as a means to create additional compute and storage surface is probably the right way to go. There might not be a heavy distributed sensor load yet, but building out the connectivity solution as if there were will prepare you for more real-time IoT-type applications at some point in the future. Minimally, it forces you to rethink what constitutes a cloud, allowing you to take advantage of distributed resources.
For companies that are already surfacing lots of telemetry information, the question is likely more about what to do with all that information. The ideas will start with periodic tuning (identifying maintenance opportunities, for instance) and gradually move closer to real-time operations. In these cases, the problem to solve for initially is the collection of IoT data.
But even in this case, that collection will likely favor aggregation points connecting to some access solution. It means that, minimally, people ought to be considering what those IoT gateways look like. Does it make sense, for example, to include even small amounts of distributed compute and storage in anticipation of real-time application requirements? In rugged environments especially, it might not be practical to rely on a distant cloud that, when unavailable, might require days or weeks for remediation.
The Bottom Line
The basic point here is that having lots of data is eventually going to require doing something with that data. If the data is going to be used for periodic updates to train models, then having connectivity is probably sufficient. But if you imagine using that information in or near real time, then architectural planning ought to consider how to insert small compute and storage surfaces as a matter of normal upgrade.
Of course, there will be a longer-term requirement around management and orchestration of those workload surfaces, which starts to expand the conversation beyond mere connectivity. The clever architect will want to at least be certain that current decisions are not precluding future actions.
The real punch line? Planning even a bit ahead when making what seem like pure connectivity decisions will save you a lot of time and effort, especially if your deployments are going to be in hard-to-reach locations.
(BTW if you looked at the photo, I just Rickrolled you).