Integration Across Shared Control Planes

If the future of federated controllers is based on service layering, then how do multiple controllers manage the same device? Is there a requirement for state synchronization? Do they share information about device operation or configuration? Is there a need for controllers that are managing different aspects of the same device to be coordinated in what they do?

As with anything worth asking, the answer is: it depends. It is certainly the case that in a tiered controller architecture where one controller manages things like basic configuration and another works higher up the stack (managing a service, for instance), there is no need to keep high-fidelity replicas of state on both controllers. Configuration details that matter to the lower-level controller can likely be spared from the services controller. In the cases where the services controller does need configuration state (VLAN provisioning, for instance), it can likely query the device to get it.

In this model, the device itself becomes the synchronization point. It serves, to some extent, as a state storage mechanism. And when the higher-level controller needs state, it can query the actual device to get it.
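As a rough sketch of that pattern, consider a services controller that holds no replica of low-level state and simply asks the device whenever it needs an answer. The DeviceClient interface and getProvisionedVlans method below are hypothetical placeholders, not an actual OpenDaylight or vendor API.

// Minimal sketch: the device itself is the synchronization point.
// DeviceClient and getProvisionedVlans are hypothetical placeholders,
// not an actual OpenDaylight or vendor API.
import java.util.List;

interface DeviceClient {
    // Fetches current VLAN provisioning directly from the device on demand.
    List<Integer> getProvisionedVlans(String deviceId);
}

class ServicesController {
    private final DeviceClient device;

    ServicesController(DeviceClient device) {
        this.device = device;
    }

    // No locally replicated copy of low-level state: the device is queried
    // only when the service layer actually needs the answer.
    boolean vlanAvailable(String deviceId, int vlanId) {
        return !device.getProvisionedVlans(deviceId).contains(vlanId);
    }
}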

But does this mean that controllers do not need to interoperate at all?

Here, the answer is less clear. It seems that if multiple controllers are managing a device in support of the same set of applications, they ought to at least share the same view of the application requirements. This calls less for explicit state synchronization and more for a common understanding of the applications and services that run atop the network. For that to be possible, there has to be a common application abstraction that, even if not shared, is known to both controllers.

Controllers with different (or worse, competing) views of application requirements might not be able to work together. There could be situations where contention arises. In those cases, which controller gets precedence? If we follow networking best practice in other areas, the local (or lowest level) controller would drive behavior in these situations. But that could compromise higher-level applications and services.

Instead, we ought to consider a model where there is a common application or service abstraction that is learned by the controllers. It is less important whether these are independently learned or if they are shared from one controller to another. But having a common view of application SLAs makes it at least possible to take action against one without contradicting another.
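To make that concrete, a shared abstraction could be as small as an SLA record that every controller interprets the same way. This is a minimal sketch; the field names below are illustrative assumptions, not part of any standard or Plexxi model.

// Minimal sketch of a shared application abstraction. The field names are
// illustrative assumptions, not a standard or vendor-defined model.
import java.util.Objects;

final class ApplicationSla {
    final String appName;
    final int latencyBudgetMs;   // end-to-end latency the app can tolerate
    final int minBandwidthMbps;  // bandwidth floor the network should honor
    final int priority;          // relative weight when SLAs conflict

    ApplicationSla(String appName, int latencyBudgetMs,
                   int minBandwidthMbps, int priority) {
        this.appName = Objects.requireNonNull(appName);
        this.latencyBudgetMs = latencyBudgetMs;
        this.minBandwidthMbps = minBandwidthMbps;
        this.priority = priority;
    }
}

// Each controller acts independently, but against the same abstraction, so a
// local decision can be checked against the common view of intent.
interface SlaAwareController {
    boolean canSatisfy(ApplicationSla sla);
    void apply(ApplicationSla sla);
}

Whether the controllers learn this record independently or pass it between themselves matters less than the fact that both evaluate their actions against the same fields.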

But once the application requirements are known, how is activity between the controllers orchestrated?

To see Plexxi’s integration with OpenDaylight, tune into the March 14 live demonstration on SDNCentral. For full details, check out the event registration page.

There need not be tight integration between the controllers. It is likely sufficient that data be shared, which calls for a data-sharing scheme. That sharing can be tightly coupled (as in a tightly integrated vertical stack) or loosely coupled, allowing different components to work together. Either is a valid approach.

The SDN winds seem to be blowing more towards loosely coupled systems. The fear with tightly coupled solutions is that individual components are not easily replaced should a customer want to swap one thing out for another. If this trend holds, ongoing controller work will need to add some basic data-sharing capabilities. When I say data sharing here, I don't necessarily mean to limit it to a common data model that drives the entire system. That is absolutely needed, but there will be additional data to share that might not make sense in a central data model.
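As a sketch of what that loose coupling might look like, the only thing producers and consumers would need to agree on is a small message contract. ControllerEvent and DataBus below are hypothetical names (the record syntax assumes Java 16+), and any message bus, or even an in-process dispatcher, could fill the DataBus role.

// Sketch of a loosely coupled data-sharing contract between controllers.
// ControllerEvent and DataBus are hypothetical names; any message bus (or a
// simple in-process dispatcher) could fill the DataBus role.
import java.util.Map;
import java.util.function.Consumer;

// Producers and consumers agree only on this small contract, not on each
// other's internals, so either side can be swapped out independently.
record ControllerEvent(String topic, Map<String, String> attributes) { }

interface DataBus {
    void publish(ControllerEvent event);
    void subscribe(String topic, Consumer<ControllerEvent> handler);
}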

Imagine that something changes in the physical or even virtual topology – a link goes down, a new device is added, a MAC or VM moves. We might not want to store full topological information centrally. And even if we want to store all of that centrally, the problem becomes state synchronization (a difficult problem to solve in distributed systems). It should be possible to send information from whichever node has data to other nodes that want that data.

There is no assumption here that all data is relevant to all nodes. In a network setting, this might be less important, but if orchestration is expected to be across the whole of IT infrastructure (compute, storage, applications, and networking), then the types of information that are relevant will vary wildly from device type to device type. We need a lightweight means of sharing the relevant bits of information in real time so the systems can adjust.
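Continuing the hypothetical DataBus sketch above, a trivial in-process implementation shows the relevance filtering: whichever node observes a topology change publishes it, and only elements that subscribed to that topic ever receive it.

// Continuing the hypothetical DataBus sketch: a trivial in-process
// implementation. Only nodes that subscribed to a topic see its events, so
// data that is irrelevant to a given element is simply never delivered.
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

class InProcessDataBus implements DataBus {
    private final Map<String, List<Consumer<ControllerEvent>>> handlers =
            new ConcurrentHashMap<>();

    @Override
    public void subscribe(String topic, Consumer<ControllerEvent> handler) {
        handlers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>())
                .add(handler);
    }

    @Override
    public void publish(ControllerEvent event) {
        // Deliver only to subscribers of this topic; nothing is replicated
        // globally and no central store has to hold the full topology.
        handlers.getOrDefault(event.topic(), List.of())
                .forEach(h -> h.accept(event));
    }
}

class TopologyChangeDemo {
    public static void main(String[] args) {
        DataBus bus = new InProcessDataBus();
        // The network controller cares about topology; a storage or compute
        // orchestrator simply never subscribes to this topic.
        bus.subscribe("topology",
                e -> System.out.println("Recompute paths: " + e.attributes()));
        bus.publish(new ControllerEvent("topology",
                Map.of("change", "link-down", "link", "leaf1-spine2")));
    }
}

A real deployment would presumably use a proper message bus rather than an in-process dispatcher, but the contract between the elements stays the same.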

Again, this all has to be in support of some overarching imperative – the application or service. It seems likely that higher-level abstractions end up providing guide rails between which the rest of the infrastructure must operate. Individual, semiautonomous (or even completely autonomous) systems can then operate within those guide rails.

From a Plexxi perspective, this is why we are so keen to contribute an abstraction model to OpenDaylight. We believe that the abstractions must exist and that they must be sharable not only across infrastructure but also across controllers. They provide a single source of intent. Beyond that, we fundamentally believe there must be a means of sharing finer-grained data between elements operating in cahoots in service of that larger objective.

Whatever the eventual outcome, any communication methods need to be sharable and transferable across devices and vendors. The role of open source in networking (and IT in general) is critical to making sure everything can work together. Isolated islands in an SDN world largely defeat the purpose of a distributed, coordinated, and orchestrated environment.

[Today’s fun fact:  A quarter has 119 grooves on its edge, a dime has one less groove. Somehow I don’t think this is what they were talking about when Stella got her mysterious groove back.]


