Choosing a Connectivity Standard
Here's everything you need to know when selecting a connectivity standard.
The Industrial IoT (IIoT) is hard to define. It includes all connected devices across industries, which is arguably the future for all sectors. But defined or not, the IIoT is immense. Perhaps more surprisingly, there are already standards targeting almost every application. But the space is far too big to expect a single connectivity standard to span everything.
That implies two problems: first, current projects and applications need guidance to understand the different options and thus start off with the right standard. Longer term, to build a true Internet, we will eventually need to connect subsystems based on different standards.
The largest consortium in the industry, the Industrial Internet Consortium (IIC), brought together dozens of experts for a multi-year effort to understand IIoT connectivity. The resulting Industrial Internet Connectivity Framework (IICF) focuses on the layers above network packet exchange. It addresses both the longer-term "true Internet architecture" problem and the shorter-term "understanding" problem. To define an interconnected future, it defines an architecture for integrating technologies. To aid understanding, the IICF defines a new stack, outlines the key aspects that differentiate technologies, presents a framework for evaluating technologies against those aspects, and dives deeply into analyzing the key standards. There is no comparable work in the space.
Future Integrated Architecture
The IICF proposes a future Internet by combining "Core Connectivity Standards" (CCS). Importantly, the IICF recognizes that it's not practical to bridge every one of the dozens of IIoT connectivity standards with every other standard. That would create an "N-squared" problem: each new standard requires a bridge to every existing standard, so the total number of custom bridges grows quadratically with N. The CCS design eliminates the "N-squared" problem by choosing a few standards that, together, span the space and separately provide key functionality. The architecture then connects core standards through well-defined standardized bridges called "core gateways." Each of the myriad other connectivity technologies need only then interface to the system through any one CCS. This enables practical end-to-end data exchange.
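The bridge-count arithmetic behind this design is easy to sketch. The snippet below (an illustration of the counting argument, not anything from the IICF itself) compares pairwise bridging of N standards against routing every non-core standard through one of a few core standards:

```python
# Pairwise-connecting N standards needs N*(N-1)/2 bridges, which grows
# quadratically. With a few core standards, each non-core standard needs
# only one bridge into a core, plus a fixed set of core-to-core gateways.

def pairwise_bridges(n: int) -> int:
    """Custom bridges if every standard bridges to every other."""
    return n * (n - 1) // 2

def core_gateway_bridges(n: int, cores: int) -> int:
    """Gateways among the cores, plus one bridge per non-core standard."""
    return cores * (cores - 1) // 2 + (n - cores)

# With, say, 20 standards and the 4 IICF core standards:
print(pairwise_bridges(20))         # 190 pairwise bridges
print(core_gateway_bridges(20, 4))  # 6 core gateways + 16 bridges = 22
```

The quadratic term collapses to a small constant, which is why the architecture scales as new connectivity technologies appear.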
The IICF’s Core Connectivity Standards architecture proposes to connect a few highly-capable standards together with standardized gateways. Then, the many other standards can integrate into the system via any of the core standards. This allows a future Industrial Internet of Things that spans a huge application space.
This design enables a scalable, deeply connected future Industrial Internet of Things. Of course, it requires that we carefully choose only a handful of core standards. The IICF requires a connectivity core standard to fit some key criteria:
- A CCS must provide “syntactic” interoperability. That means that the standard must support interchange of complex data types between dissimilar systems. This enables a key level of interoperability; it lets participants communicate independently of each other’s implementation details.
- A CCS must be an open standard with strong independent, international governance. It must have multiple implementations with support for validating or testing interoperability.
- A CCS should be applicable across multiple industries. A standard may have sufficient traction in only one industry today, but it must then show potential in others.
- The CCS must be proven and deployed across multiple applications, ideally in different vertical industries. Successful IIoT connectivity standards take many years to develop, more years to deploy, and even more to confirm usefulness. The future IIoT may be a distant vision, but it’s impractical to expect an entirely new design to be relevant for decades.
- The CCS must be practical to integrate with others. It must have a path to standard core gateways to all other CCS standards, including industry and standards organization support.
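The first criterion, "syntactic" interoperability, is worth making concrete. The sketch below uses a hypothetical shared schema (the type and encoding are invented for illustration) to show the core idea: both endpoints agree on a declared data type and its wire format, not on each other's implementation details:

```python
import struct
from dataclasses import dataclass

# Hypothetical shared schema: both endpoints agree on this declared type
# and its wire encoding, so they can interoperate without knowing
# anything about each other's internal implementation.
@dataclass
class SensorReading:
    sensor_id: int      # uint32
    temperature: float  # float64, degrees C

    FORMAT = "!Id"  # network byte order: uint32 + double

    def encode(self) -> bytes:
        return struct.pack(self.FORMAT, self.sensor_id, self.temperature)

    @classmethod
    def decode(cls, payload: bytes) -> "SensorReading":
        sensor_id, temperature = struct.unpack(cls.FORMAT, payload)
        return cls(sensor_id, temperature)

wire = SensorReading(42, 21.5).encode()
assert SensorReading.decode(wire) == SensorReading(42, 21.5)
```

A CCS standardizes exactly this kind of typed exchange, so dissimilar systems can interpret each other's structured data without custom agreements.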
With that definition, the IIC experts proceeded to define criteria, survey standards, and evaluate standards against those criteria. Four standards were chosen as CCS candidates.
Choosing a Standard
The IIC team analyzed the six standards that stood out as having enough IIoT traction: Data Distribution Service (DDS), OPC Unified Architecture (OPC UA), one Machine-to-Machine (oneM2M), Representational State Transfer over HyperText Transfer Protocol (RESTful HTTP), Message Queuing Telemetry Transport (MQTT), and Constrained Application Protocol (CoAP). The first four of those were designated as Core Connectivity Standards per the above requirements. The other two are, of course, still important, as are others that were not analyzed. But the four chosen best satisfied the criteria and do a good job of spanning the application space.
Digging into these technologies revealed another somewhat surprising result: they essentially don’t overlap. In other words, they solve different problems for different application types. Since the connectivity options are so different, in most use cases, there is really no choice in connectivity technology. This makes an architect’s task much simpler. The real problem isn’t choosing between similar options; it is understanding the different options and overcoming biases. If you understand, the choice is clear.
With the CCS design, system implementers can choose the best-fit standard for their current application. In the future, these will be connected via the core gateways. There are already many core gateway standards defined between the CCS standards. Thus, this is a low-risk path for current designs.
Of course, that leaves the question of which standard to start with today. Below, we will go through each of these standards and quickly describe how to choose one for your application.
DDS is a series of standards managed by the Object Management Group (OMG) that define a databus. A databus is data-centric information flow control. It’s a similar concept to a database, which is data-centric information storage. The key difference: a database searches old information by relating properties of stored data. A databus finds future information by filtering properties of the incoming data. Both understand the data contents and let applications act directly on and through the data rather than with each other. Applications using either a database or a databus do not have a direct relationship with peer applications.
The databus uses knowledge of the structure, contents, and demands on data to manage dataflow. It can, for instance, resolve redundancy to support multiple sources, sinks, and networks. The databus can control Quality of Service (QoS) like update rate, reliability, and guaranteed notification of data liveliness. It can look at the data inside the updates and optimize how to send them, or decide not to send them at all. It can also discover and secure data flows dynamically. These things define interaction between software modules. The data-centric paradigm thus enables software integration.
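The "filtering future data by its content" idea is the heart of the databus. The toy sketch below is purely illustrative (it is not the DDS API, and the names are invented): subscribers register a filter over the data itself, and the bus inspects each incoming sample and delivers only what matches:

```python
from typing import Any, Callable

# Illustrative sketch only -- not the DDS API. Subscribers declare a
# content filter over the data; the "databus" inspects every incoming
# update and delivers only matching samples to each subscriber.
class MiniDatabus:
    def __init__(self) -> None:
        self._subs: list[tuple[Callable[[dict], bool],
                               Callable[[dict], Any]]] = []

    def subscribe(self, content_filter, on_data) -> None:
        self._subs.append((content_filter, on_data))

    def publish(self, sample: dict) -> None:
        for content_filter, on_data in self._subs:
            if content_filter(sample):
                on_data(sample)

bus = MiniDatabus()
hot = []
bus.subscribe(lambda s: s["temp"] > 100.0, hot.append)  # filter on content
bus.publish({"sensor": "pump1", "temp": 98.6})   # filtered out
bus.publish({"sensor": "pump2", "temp": 104.2})  # delivered
```

Note that publisher and subscriber never reference each other; their only coupling is the data model itself, which is what makes the paradigm an integration technology rather than just a transport.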
Data-centric DDS best fits software-intensive applications that communicate between many devices. It supports redundancy easily, and there are no servers to locate, configure, provision, reboot, choke, or fail. So, it's good for "never fail" applications. Peer-to-peer communications are very fast. And, it scales well. Most users are teams of software engineers. Many applications are building intelligent "edge" systems such as automated power control, connected medical devices, robotics, and autonomous vehicles.
OPC UA is a standard managed by the OPC Foundation, also documented as IEC 62541.
OPC UA targets device interoperability. Rather than accessing devices directly through proprietary application program interfaces (APIs), OPC UA defines standard APIs that allow changing device types or vendors. This also lets higher-level applications, such as human-machine interfaces (HMI), find, connect to, and control the various devices in factories.
OPC UA divides system software into clients and servers. The servers usually reside on a device or higher-level Programmable Logic Controller (PLC). They provide a way to access the device through a standard “device model.” There are standard device models for dozens of types of devices from sensors to feedback controllers. Each manufacturer is responsible for providing the server that maps the generic device model to its particular device. The servers expose a standardized object-oriented, remotely-callable API that implements the device model.
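The device-model pattern can be sketched in a few lines. The names below are invented for illustration (they are not taken from IEC 62541): a standard abstract model that each vendor's server implements, so client code can swap device types or vendors without changes:

```python
from abc import ABC, abstractmethod

# Hypothetical sketch of the "device model" idea; the class and method
# names are invented, not from the OPC UA companion specifications.
class TemperatureSensorModel(ABC):
    """Generic, standardized model that any vendor's server implements."""
    @abstractmethod
    def read_temperature(self) -> float: ...
    @abstractmethod
    def unit(self) -> str: ...

class AcmeSensorServer(TemperatureSensorModel):
    """Vendor server mapping the generic model to one particular device."""
    def read_temperature(self) -> float:
        return self._read_acme_register()  # vendor detail, hidden
    def unit(self) -> str:
        return "celsius"
    def _read_acme_register(self) -> float:
        return 21.5  # stand-in for a proprietary driver call

def log_reading(sensor: TemperatureSensorModel) -> str:
    # Client code (an HMI, a historian) depends only on the standard model.
    return f"{sensor.read_temperature()} {sensor.unit()}"

print(log_reading(AcmeSensorServer()))  # 21.5 celsius
```

Swapping in a different vendor's server leaves `log_reading` untouched, which is exactly the interchangeability OPC UA's device models provide.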
Most OPC UA applications are in discrete manufacturing. The device models help integrate interchangeable devices and software components like HMIs and historians. OPC UA targets workcell integration of typically no more than 10-20 devices. The address model and object-oriented nature directly support a hierarchy of these workcells. Users of the other standards rarely characterize their use cases as "workcells." Almost all users of the API are building a device rather than a final system. Most integration users are plant or process engineers, rather than software teams.
OneM2M targets the mobile networks supported by service providers like Telcos. It provides a common service layer that sits between applications and connectivity transport. Its emphasis is on providing common services on top of different connectivity standards.
The core design of oneM2M is to define services that mobile devices can use to cooperate and integrate. They run in the platform layer (cloud) supported by mobile service providers. OneM2M abstracts differences in protocols to mobile devices. Thus, it can integrate different ways to connect to similar devices. While almost all the other standards support IP over cell networks, most oneM2M teams consider the cellular network their primary connection technology.
REST over HTTP is the most common interface between consumer applications and web services. REST is an architectural pattern for accessing and modifying an object or resource. One server usually controls the object; others request a “representation” and may then send requests to create, modify, or delete the object.
REST is the most widespread way to build web services, so there are copious offerings to help developers. It's especially good for HMI development. In the IIoT, most applications that rely on REST for their primary connectivity have relatively simple connections, for instance from one device to a cloud service. Another common use case is to put an HTTP server on the device and connect to it for configuration. REST over HTTP isn't particularly fast; most users are building applications that involve humans at human speeds.
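The "representation" pattern described above can be sketched without any network at all. The toy server below (an illustration of the REST verbs, not a real HTTP stack) owns each resource; clients create, read, replace, and delete representations of it:

```python
# Minimal sketch of the REST pattern: one server owns the resource;
# clients exchange "representations" of it via create/read/update/delete,
# the semantics HTTP maps onto POST/GET/PUT/DELETE.
class ResourceServer:
    def __init__(self) -> None:
        self._store: dict[str, dict] = {}
        self._next_id = 1

    def post(self, representation: dict) -> str:    # create, returns new id
        rid = str(self._next_id)
        self._next_id += 1
        self._store[rid] = dict(representation)
        return rid

    def get(self, rid: str) -> dict:                # read a representation
        return dict(self._store[rid])

    def put(self, rid: str, representation: dict) -> None:  # replace
        self._store[rid] = dict(representation)

    def delete(self, rid: str) -> None:
        del self._store[rid]

server = ResourceServer()
rid = server.post({"state": "off"})   # e.g. a device's configuration
server.put(rid, {"state": "on"})
assert server.get(rid) == {"state": "on"}
```

Because every interaction is a request against a server-owned resource, the pattern fits configuration and human-facing use cases far better than continuous device-to-device dataflow.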
MQTT is a very simple protocol designed mostly for the “data collection” use case. It does not qualify as a “core connectivity standard” per the IICF guidelines, because it has no standard type system. Without a type system, it cannot offer a standard ability to interoperate at the “syntactic” data-structure level, leaving all data interpretation to the application. MQTT is a “hub and spoke” design, with devices talking to a central broker.
Nonetheless, because of its simplicity, MQTT is good for many applications. It provides little functionality, so most applications do not have complex software challenges. It's not that easy to connect between "spokes," so most applications don't have much device-device interaction. It does scale reasonably. Because of all these factors, the most common use case for MQTT is data collection from field devices, often to a cloud server.
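MQTT's simplicity shows in what the broker actually does: it routes opaque payloads by topic string alone, never interpreting the data. The sketch below implements MQTT's topic-filter matching rules (`+` matches exactly one level, `#` matches all remaining levels), which is essentially all the "understanding" the broker has:

```python
# The MQTT broker routes by topic string only; payloads are opaque bytes
# (hence no syntactic interoperability). Topic filters use '+' to match
# one level and '#' to match any number of trailing levels.
def topic_matches(filter_: str, topic: str) -> bool:
    f_parts = filter_.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":                  # multi-level wildcard, must be last
            return True
        if i >= len(t_parts):         # filter is longer than the topic
            return False
        if f != "+" and f != t_parts[i]:
            return False
    return len(f_parts) == len(t_parts)

assert topic_matches("plant/+/temp", "plant/pump1/temp")
assert topic_matches("plant/#", "plant/pump1/temp")
assert not topic_matches("plant/+/temp", "plant/pump1/pressure")
```

Everything beyond this routing, including what the bytes in each message mean, is left to agreement between the applications on either end.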
CoAP is a RESTful connectivity transport standard inspired by HTTP but designed to be more lightweight and efficient. CoAP is designed to interoperate with HTTP and RESTful web services through simple proxies. The primary use case is to use REST communications but with a lighter-weight protocol than HTTP over TCP. It simplifies connection to web services.
Like MQTT, CoAP lacks a type system and thus does not qualify as a Core Connectivity Standard.
CoAP targets low-power, wireless devices with few resources. CoAP is the least “chatty” of the options. Most implementations target low-power wireless specifications like 6LoWPAN. CoAP isn’t particularly fast, but most applications run on batteries, so they favor low power over speed.
Most users of CoAP are building battery-operated devices that communicate with humans or business systems using web services.
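CoAP's efficiency comes largely from its compact binary encoding. Per RFC 7252, the fixed header is only 4 bytes, versus dozens of bytes of text for an equivalent HTTP request line and headers. The sketch below encodes that fixed header (the constant names are ours, but the bit layout follows the RFC):

```python
import struct

# RFC 7252 fixed header, 4 bytes total:
#   byte 0: version (2 bits, always 1) | message type (2 bits) | token length (4 bits)
#   byte 1: code (e.g. 0.01 = GET)
#   bytes 2-3: message ID, big-endian
def coap_header(msg_type: int, code: int, message_id: int,
                token_length: int = 0) -> bytes:
    version = 1  # CoAP protocol version is fixed at 1
    byte0 = (version << 6) | (msg_type << 4) | token_length
    return struct.pack("!BBH", byte0, code, message_id)

CON, GET = 0, 0x01  # Confirmable message type; GET is method code 0.01
header = coap_header(CON, GET, message_id=0x1234)
assert header == b"\x40\x01\x12\x34"
assert len(header) == 4
```

Four bytes on the wire for a complete request header is what makes CoAP practical over constrained links like 6LoWPAN, where every transmitted byte costs battery.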
Comparing these technologies highlights the stark differences and non-overlapping nature of connectivity approaches.
For instance, OPC UA is object oriented (OO), while DDS is data centric. Those are diametric opposites. The object-oriented mantra is “encapsulate data, expose methods.” Data centricity is all about exposing data, and there are no user-defined methods. The only methods are defined by the standard.
OPC UA targets final device-centric integration by plant engineers. It offers easy interoperability between devices from different vendors. By contrast, DDS targets final data-centric software integration by software teams. As intelligent software gains importance, DDS provides the global data abstraction and dataflow interface control that software teams need.
OneM2M and RESTful HTTP aim at connection from the edge to cloud services. Unlike DDS or OPC UA, neither is often used for device-device communications. They truly operate in entirely different spaces. OneM2M works by offering common services aimed at integrating mobile devices. None of the other technologies target this application.
Neither MQTT nor CoAP have a standard type system. Thus, they leave interoperability to the applications. You should consider these only if you control the interfaces between all devices and servers.
MQTT applications mostly target data collection from devices to a central store or analysis function. This application is too simple to need either OPC UA or DDS, which are designed for communication between devices.
Finally, while all the technologies have connections to web services, CoAP is the only one that directly implements device connectivity with a RESTful pattern. It ideally fits that use case.
Looking at the differences, it's clear that these technologies simply do not compete in practice. However, the level of "confusional competition" is amazing. The various vendors and standards organizations, in general, do not help. Their positioning often uses similar words for vastly different concepts. Common terms like "publish subscribe" hide huge differences in types of information, discovery, selection of data, and QoS control. "Real time" without specifying a time period like milliseconds or minutes is meaningless. Additionally, the Internet of "Things" leaves a huge range of "things" to the user. These confused terms cause many to think, for instance, that OPC UA "pubsub" is similar to DDS or that CoAP is similar to MQTT. These perceptions are far from accurate. These technologies are not different approaches to the same problem; rather, they address entirely different problems. Hopefully, this article can help resolve some of this confusion, as most applications best fit one, and usually only one, of these popular standards.
The beginnings of the interconnected future are already in place. Most of the technologies have bridges to RESTful HTTP and web services already. The OMG recently adopted a standard for a gateway between DDS and OPC UA. Connecting to wireless networks will push integration with oneM2M as well.
Nonetheless, if you are developing an application today, it’s critical to spend some time understanding the options. Choosing the wrong technology can be very painful. Worse, the poor choice may not be obvious for months or years; scalability, performance, and reliability problems are rarely obvious in small test systems. It is very worthwhile to study these technologies before making a choice. Early and careful consideration of the capabilities, proven use cases, and target users can save painful redesign or project failure.