Applications are at the center of the IT universe. As IT shifts its primary goal from connectivity to experience, it will require tighter collaboration between the various infrastructure elements that support application workloads. There are two philosophical approaches to how this orchestration might take place: through a tightly-integrated system, or through a more loose coupling of heterogeneous components.
But how should architects make the choice between these approaches?
The principles of architecture tend to be most vehemently argued by the vendors competing to sell the underlying solutions. IT vendors generally (and networking vendors in particular) tend to turn these discussions of principle into tit-for-tat FUD wars, arguing in absolutes that one approach or the other is the right way to go. But the architects who put their careers on the line when they select an architectural approach should understand more fully what drives specific architectural selections.
The difference between tightly-integrated systems and more loosely federated components really comes down to performance.
Whenever two components come together, that boundary is defined by some interface. If you need to extract performance out of the coupled system, you have to make changes on one or both sides of said interface. As a vendor, if you can twiddle the bits on only one side, you can improve the overall system performance up to but not beyond whatever the other side can do.
So when performance is the primary objective, you will tend to see solutions where both sides of that interface are owned (or at least controlled) by the same party. The ability to make changes on both sides of the interface is the only way to maximize performance. When the primary objective is not performance, you will see a generalized interface that sits between a decoupled pairing of solution components.
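The tradeoff above can be sketched in code. This is a hypothetical illustration (the `NetworkProvisioner` interface and vendor class are invented for this example, not a real API): an orchestrator that talks to a generalized interface can swap components freely, but it can only ever use the operations that interface exposes, while vendor-specific knobs live on the other side of the boundary.

```python
from abc import ABC, abstractmethod

# Hypothetical generalized interface: the orchestrator is decoupled from any
# one implementation, at the cost of being limited to what the interface
# exposes.
class NetworkProvisioner(ABC):
    @abstractmethod
    def provision(self, tenant: str, vlan: int) -> bool:
        """Provision connectivity for a tenant; details vary by vendor."""

class VendorAProvisioner(NetworkProvisioner):
    def provision(self, tenant: str, vlan: int) -> bool:
        # A vendor-specific implementation may have its own fast paths
        # (batching, hardware offload) that the generic interface cannot
        # reach -- which is where a tightly-integrated system gains its
        # performance headroom.
        return True

def onboard(prov: NetworkProvisioner, tenant: str) -> bool:
    # The orchestrator sees only the interface, so components can be swapped.
    return prov.provision(tenant, vlan=100)

print(onboard(VendorAProvisioner(), "tenant-1"))  # → True
```

The point of the sketch is not the classes themselves but the boundary: improving performance beyond what `provision()` allows requires changing both sides of it.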
Enter SDN. Or network virtualization. Or NFV. Or DevOps.
When we talk about performance as an industry, we usually mean capacity and speed. But performance is more than bandwidth and latency. The whole reason any of the SDN technologies is emerging is to satisfy operational issues. Getting applications provisioned, monitored, troubleshot, billed, upgraded, and so on has taken over the top spot on the pain list for many companies. The question we ought to be asking is: what are the operational performance requirements?
The answer isn't black and white. What does performance even mean in an operational setting?
It seems at least plausible that operational performance translates to things like the rate of change (think provisioning changes per second or call setup and teardown rates, for example) or the rate of polling (queries per second, as with monitoring or billing). For some environments, it might be that the scale of configuration management or data querying is quite high. Any company that is doing fine-grained monitoring or rapid state-based network changes, for example, might have very high operational performance requirements. Meanwhile, most normal networks will likely have a much lower performance bar.
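A quick back-of-envelope calculation makes the divide concrete. The numbers below are hypothetical, chosen only to show how fine-grained monitoring drives the query rate that a management interface must sustain:

```python
# Back-of-envelope estimate (hypothetical numbers) of operational load,
# used to decide whether a generalized management interface is fast enough.
devices = 5_000            # managed network elements
metrics_per_device = 50    # counters polled per element
poll_interval_s = 10       # fine-grained monitoring interval

queries_per_second = devices * metrics_per_device / poll_interval_s
print(f"{queries_per_second:,.0f} queries/sec")  # 25,000 queries/sec
```

Stretch the poll interval to five minutes and the same fleet needs well under a thousand queries per second; the architecture that fits one environment is overkill or underpowered for the other.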
For the former, the objective has to be to eke out every bit of operational performance from the system. This will demand a more tightly-integrated solution. Both sides of the resource boundary (network and storage, as an example) might need to be within the same system, and the interface between them should appropriately be very specific to the implementation.
For the latter, a more generalized interface between infrastructure elements should be more than sufficient. The primary goal is not to maximize performance but rather enable collaboration between components. In these architectures, the generalized interface is the most important thing as it will optimize choice and flexibility between the individual system elements.
Both are absolutely valid use cases; there is no judgment in which is the more noble cause. But architects ought to be clear about what it is they are optimizing for. Selecting a generalized interface merely because it is open could be disastrous if it turns out that the performance requirements exceed what that interface provides. Conversely, selecting a tightly-integrated system might be more costly or limiting than is necessary if the real problem is orchestration rather than performance.
So where do architects start?
Everything starts with requirements. Is the objective to achieve a specific rate of change? Or is the objective merely to make tasks like provisioning and troubleshooting more coordinated across infrastructure silos? Are you planning to do anything exotic in terms of polling data on the system elements? Or are you expecting data to be accessed at a more casual rate?
The real point here is that architects should start to express their orchestration requirements in terms of both capability and performance. We do this instinctively when we think about how we move bits back and forth, or how we access storage, or how we allot cycles on a server. But when it comes to management, because our collective capabilities have been so lacking, we have ignored performance. As SDN and other technologies continue to advance, operational performance will take on a more important role. And without knowing what the requirements are, designers will really be flying blind, making tradeoffs that might not even be necessary.
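One way to make that discipline concrete is to write orchestration requirements down as data rather than prose. The sketch below is illustrative only (the `OrchestrationRequirement` type and the numbers are invented): it simply pairs each capability with the operational rate it must sustain, so a candidate system can be checked against the list explicitly.

```python
from dataclasses import dataclass

# Hypothetical sketch: stating orchestration requirements as capability plus
# performance, so a candidate interface can be evaluated against them.
@dataclass
class OrchestrationRequirement:
    capability: str          # e.g. "provisioning", "monitoring"
    rate_per_second: float   # required operational performance

requirements = [
    OrchestrationRequirement("provisioning changes", 5.0),
    OrchestrationRequirement("monitoring queries", 2_000.0),
]

def meets(offered_rate: float, req: OrchestrationRequirement) -> bool:
    # A system qualifies only if it sustains at least the required rate.
    return offered_rate >= req.rate_per_second

print(all(meets(10_000.0, r) for r in requirements))  # → True
```

Even this trivial structure forces the question the article raises: is the number in the second column high enough to demand a tightly-integrated system, or low enough that a generalized interface will do?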
[Today's fun fact: In ancient Rome, it was considered a sign of leadership to be born with a crooked nose. If Mike Tyson were born earlier, we'd call him Emperor.]