As enterprises move beyond trial phases to seriously integrate Internet of Things traffic, three key factors are coming to the fore. As I described in DZone's Guide to The Internet of Things Volume III, the first is that the future will bring a huge increase in the number of sensors, actuators, and devices broadly categorized as the Internet of Things (IoT). The amount of data generated by these new endpoints is staggering. More importantly, a very large percentage of these devices will be too small, too cheap, too dumb, and too numerous to run the hegemonic IPv6 protocol. Yet somehow, this traffic must still be incorporated into enterprises' networks.
Simultaneously, these same enterprises are increasingly moving their main networking and computing tasks to cloud network architectures. To maintain or increase control of these resources, many enterprises are rolling out variations of Software Defined Networking (SDN) technologies and protocols. Although standards and techniques vary across SDN approaches, a common requirement is computing power at the edges of the network to manage the interactions. The conflict between these first two realities is immediately obvious: a proliferation of simple devices with little to no computing power or memory, meeting enterprise networking schemes that demand both.
The third factor is somewhat hidden from view, but it will become more important as enterprises seek to truly incorporate all the data from every kind of "thing." There are millions of legacy machine-to-machine (M2M) networks operating everything from robotic assembly lines to process control facilities. These run on idiosyncratic older protocols that are often implemented on purpose-built Programmable Logic Controllers (PLCs). Although these M2M networks have long existed as independent islands, enterprises increasingly wish to pull them into a broader IoT strategy. But these legacy networks were often designed with no thought to wider communication or management, and they demand local control and response.
Concomitantly, any solution addressing these three factors must also be exceptionally tolerant of disruption, mobility, and change. Enterprises placing "more eggs in one basket" will require that all data operations continue despite perturbations.
Abstracting Diverse Networks to Unify Them
The solution to these seemingly contradictory requirements is not to change the devices, as some have suggested. Many IoT devices can't be burdened with IPv6 and its accompanying costs in processing, memory, and power. Many legacy M2M networks are likewise based on simple devices and would require many person-years of software development to fully participate in an IPv6 network, let alone SDN.
Instead, the answer is simple in concept but demanding in implementation: the Abstracted Network. The Abstracted Network replaces the topologically separate networks — existing legacy M2M and enterprise networks, along with the emerging IoT — with a single network that preserves the appearance of separation while delivering the benefits of integration.
Traditional networks are often still separated today according to whether they handle human-oriented traffic (smartphones, tablets, computers, etc.) or M2M traffic (sensors, actuators, robots, etc.). As noted above, many M2M devices operate as "islands" or "silos" disconnected from the rest of the enterprise. These isolated networks have persisted despite the rise of IP because of the simplicity of the end devices or their peculiar communications and control requirements (see Figure 1 below).
Figure 1: Traditionally separate networks for human-oriented and machine-to-machine data.
It's perhaps immediately obvious that these separated networks carry networking inefficiencies, but more critically for enterprises, the legacy and IoT networks are not easily controlled and tuned by SDN software. Another hidden issue may be even more important in the long run: events and trends generated within the legacy and IoT networks are invisible to the primary Big Data servers handling the rest of the enterprise's activities. The power of the publish/discover/subscribe model I described in DZone's Guide to The Internet of Things Volume III depends on incorporating the broadest possible range of data sources and devices; it uses a device called a propagator node to create the virtual network topology (Figure 2 below).
Figure 2: The Abstracted Network based on propagator nodes emulates separate networks but allows Enterprise control and publish/discover/subscribe.
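To make the publish/discover/subscribe model concrete, here is a minimal sketch in Python. All class and method names (`PropagatorNode`, `publish_stream`, `discover`, and so on) are illustrative assumptions, not an actual propagator-node API; the point is only the three-phase flow: devices announce streams, consumers discover streams by interest, and readings are forwarded only to subscribers.

```python
from collections import defaultdict

class PropagatorNode:
    """Toy broker: devices publish named streams; consumers discover and subscribe."""

    def __init__(self):
        self.streams = {}                     # stream name -> metadata
        self.subscribers = defaultdict(list)  # stream name -> callbacks

    def publish_stream(self, name, metadata):
        """A device announces a data stream it can produce."""
        self.streams[name] = metadata

    def discover(self, predicate):
        """A consumer finds streams matching its interest."""
        return [n for n, meta in self.streams.items() if predicate(meta)]

    def subscribe(self, name, callback):
        self.subscribers[name].append(callback)

    def push(self, name, reading):
        """Forward a reading only to interested subscribers."""
        for cb in self.subscribers[name]:
            cb(reading)

node = PropagatorNode()
node.publish_stream("plant-3/temp", {"type": "temperature", "unit": "C"})
matches = node.discover(lambda m: m["type"] == "temperature")
received = []
node.subscribe(matches[0], received.append)
node.push("plant-3/temp", 21.5)
```

Because discovery is driven by metadata rather than addresses, a subscriber never needs to know whether the stream originates on an IPv6 device or a legacy PLC behind a propagator node.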
Propagator nodes are similar to traditional networking devices, such as routers and switches, but also incorporate support for legacy and emerging IoT protocols as well as standards-based IPv6. Propagator nodes emulate the formerly separate networks' protocols, timing, and control interactions. This means that legacy and IoT devices that cannot incorporate higher-level protocols themselves may still function as part of the overall enterprise operational structure.
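The translation half of that emulation can be sketched as a simple normalization step. The frame layout below is made up for illustration (a one-byte device ID followed by two 16-bit fields, loosely echoing register-oriented protocols such as Modbus); a real propagator node would handle each legacy protocol's actual framing, timing, and control semantics.

```python
import struct

def translate_legacy_frame(frame: bytes) -> dict:
    """Normalize a (hypothetical) legacy M2M frame into a common message
    so IPv6-side consumers never see the idiosyncratic wire format."""
    # Big-endian: unsigned byte (device id), two unsigned shorts (register, value)
    device_id, register, value = struct.unpack(">BHH", frame)
    return {
        "source": f"legacy/plc-{device_id}",
        "register": register,
        "value": value,
    }

frame = bytes([7]) + (40001).to_bytes(2, "big") + (512).to_bytes(2, "big")
msg = translate_legacy_frame(frame)
```

The legacy device keeps speaking its native protocol; only the propagator node carries the cost of the conversion.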
The Networks Are in the Database
A sophisticated distributed database is hosted in application agents found in each propagator node. Built partially on discovery and tuned to the enterprise's needs, the database forms a model of the logical interconnections and networking needs of each type of attached device (Figure 3 below). Network traffic flows and interactions are tracked to constantly update the abstracted model, creating an efficient virtual network architecture regardless of the physical topology. This includes such details as latency, protocol translation, multicast pruning and forwarding, and even control loop management.
Figure 3: The Abstracted Network is created as a database within propagator nodes, virtualizing legacy and standards-based connections.
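The flow-tracking idea above can be sketched as a small data structure. Everything here is a hypothetical simplification: the model records each observed (source, destination) flow with a running mean latency, so the logical interconnections are inferred from traffic rather than from physical wiring.

```python
from collections import defaultdict

class AbstractedNetworkModel:
    """Toy per-node database: observed flows build a logical topology."""

    def __init__(self):
        # (src, dst) -> running latency statistics
        self.flows = defaultdict(lambda: {"count": 0, "mean_latency_ms": 0.0})

    def observe(self, src, dst, latency_ms):
        """Update the model with one observed flow sample (running mean)."""
        f = self.flows[(src, dst)]
        f["count"] += 1
        f["mean_latency_ms"] += (latency_ms - f["mean_latency_ms"]) / f["count"]

    def neighbors(self, src):
        """Logical interconnections inferred from traffic, not wiring."""
        return sorted(dst for s, dst in self.flows if s == src)

model = AbstractedNetworkModel()
model.observe("sensor-a", "plc-1", 4.0)
model.observe("sensor-a", "plc-1", 6.0)
model.observe("sensor-a", "cloud", 40.0)
```

In a real deployment this model would also carry the protocol, multicast, and control-loop details mentioned above, and the per-node databases would synchronize with one another; the sketch shows only the core pattern of traffic observations refining a virtual topology.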
Next time, we'll take a deeper look at the Abstracted Network and what it offers for the future of enterprise and IoT. We'll examine how it's tuned, how it manages change and disruption, and how it maintains M2M operation.