Networking’s Second Law
Sir Arthur Stanley Eddington was a British astronomer, physicist, and mathematician who once wrote:
The law that entropy always increases, holds, I think, the supreme position among the laws of Nature. If someone points out to you that your pet theory of the universe is in disagreement with Maxwell’s equations — then so much the worse for Maxwell’s equations. If it is found to be contradicted by observation — well, these experimentalists do bungle things sometimes. But if your theory is found to be against the second law of thermodynamics I can give you no hope; there is nothing for it but to collapse in deepest humiliation.
He was writing about the Second Law of Thermodynamics, but he could have been writing about networking.
The Second Law
In physics speak, the second law basically says that the total entropy in a system never decreases, and over time, systems will reach a state of maximum entropy. In networking speak, we can substitute complexity for entropy and derive a corollary: the complexity in a system never decreases, and over time, networks will reach a state of maximum complexity.
Every network carries some complexity, and over time that complexity only increases as new users running new applications drive new traffic onto the network. Any change made to or around the network tends to add stress, driving complexity up. Strictly speaking, you can remove users, applications, and traffic from the network, so perhaps the Conservation of Complexity is more principle than law. In practice, though, people rarely prune complexity from active, well-functioning environments.
The source of complexity
In a networking context, the complexity of the system is based primarily on the number of inputs and the number of end states that are possible. Put more simply, the more things you ask the network to do, the more complex it will inherently be. As you add more policies and ACLs, for example, you are ultimately driving up the complexity in the system.
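One back-of-the-envelope way to see this (a simplifying model of my own, not anything formal from the article): treat each feature as an independent knob, so end states multiply rather than add.

```python
# Rough complexity proxy: count end states when each feature's options
# combine independently. The independence assumption is a simplification.
def end_states(feature_options):
    """feature_options: list of option counts, one per feature."""
    total = 1
    for options in feature_options:
        total *= options
    return total

print(end_states([2] * 5))        # five binary ACL rules: 2**5 = 32 states
print(end_states([2] * 5 + [4]))  # add one 4-way QoS policy: 128 states
```

One modest 4-way policy quadruples the state space; that multiplicative effect, not the raw feature count, is the complexity you are signing up for.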
Intuitively, we know this. Provisioning a basic network with only a few capabilities is not terribly difficult. We only run into challenges when we start layering technology on top of technology, exception on top of exception. In the fullness of time, we end up with networks that resemble what we have today – sprawling beasts consisting of thousands of lines of configuration spread across dozens or hundreds of devices.
And every time we add something new, we make it all more complex. Every feature or flow creates a new set of interactions with everything already in place, so the number of interactions grows combinatorially. The pain accelerates over time in absolute terms, even though each addition, as a percentage of the whole, gets smaller.
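A minimal sketch of that growth, under the simplifying assumption that interactions are pairwise (n-choose-2): the absolute jump per new feature keeps growing, while its share of the total shrinks.

```python
from math import comb

def interactions(n_features):
    """Pairwise interactions among n features: n-choose-2."""
    return comb(n_features, 2)

prev, curr = interactions(10), interactions(11)   # 45, 55
print(curr - prev)            # absolute jump: 10 new interactions
print((curr - prev) / prev)   # relative jump: ~0.22, and shrinking as n grows
```

The eleventh feature adds ten new interactions where the third added only two; in absolute terms the pain accelerates, even as the percentage increase per feature falls.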
What do we do with all this complexity?
Complexity has been growing for years, and every new thing we add to the mix makes it worse. But what is our industry’s response to complexity? We believe we will make it better by adding more stuff. When things get too hard, we add another protocol. That protocol is then applied to any number of tenants, flows, or network links, and each application adds yet another interaction—another end state—to the system. The complexity only grows.
I understand why this happens. It is easier to add something to take away the pain. But in constantly treating the symptoms, we have collectively allowed the underlying disease to grow – not just unchecked, but with us actively feeding it. If you want an analogy from a different discipline, think of networking like the US tax code. The reason the tax code is approaching 80,000 pages is that every time a loophole appears, we add more law to close it. That new law opens up new scenarios to account for, which require yet more law.
In these systems, complexity begets complexity. That’s where we are today.
Dealing with complexity
There are basically only two things you can do with complexity – move it around (essentially make it someone else’s problem) or remove it entirely.
Moving complexity around is an exercise in encapsulation. Take traffic engineering (think MPLS, QoS, and the like). We can encapsulate the load-balancing aspect of traffic engineering and have the system handle it for us. When we do this, we don’t fundamentally reduce the number of end states the system has to deal with; the overall complexity of the system remains the same. But how that complexity is handled changes: what was once user complexity becomes vendor complexity.
Encapsulation is a great way to hide complexity, and it is useful for containing it, but it ultimately does nothing to remove complexity entirely; the overall system will still devolve toward a state of maximum entropy over time.
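A toy sketch of the point (illustrative names, not a real vendor API): wrapping path selection behind a single call moves the complexity into the wrapper, but the set of paths the system must still handle is unchanged.

```python
PATHS = ["mpls-1", "mpls-2", "backup"]

class LoadBalancer:
    """User-facing surface is one method; the path state space is unchanged."""
    def __init__(self, paths):
        self._paths = paths   # the encapsulated (now vendor-side) complexity
        self._next = 0

    def assign(self, flow_id):
        # Round-robin selection: the choice among all end states happens
        # here instead of in the user's configuration.
        path = self._paths[self._next % len(self._paths)]
        self._next += 1
        return path

lb = LoadBalancer(PATHS)
print(lb.assign("flow-a"))   # mpls-1
print(lb.assign("flow-b"))   # mpls-2
```

The user configures nothing per path, but every `(flow, path)` combination still exists inside `LoadBalancer`; the complexity has moved, not vanished.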
The only real way to remove complexity is to actually reduce the number of inputs and end states for the system. That is to say that the only real way to control the unchecked growth of complexity in networks is to fundamentally do less stuff.
Doing less in networking is a scary proposition. Networks are barely functional as they exist, and removing functionality feels like further crippling an already lame animal. But the path forward isn’t just rolling the clock back on networking and cherry-picking a few features to back out.
Don’t think of the solution as choosing what to remove. The real architectural path forward is to start at least somewhat clean and then layer in what is required. This approach is somewhat counterintuitive; to some, it will feel like reinventing everything they spent the last two decades building. But it is far more powerful to set out to justify why something should be added than to concoct reasons why it should be removed.
The simple act of working in the affirmative (asking what is needed to accomplish something) forces us to think about fundamental capabilities. This is the only way to build a network that does what we need it to do, as opposed to one merely designed not to hurt where it doesn’t have to.
Architectural resets are exceedingly difficult to execute. Pragmatically, if a project is an extension of an existing architecture, pulling capabilities out might not even be possible. This means that the precious few times where you have the freedom to make clean decisions, you have to attack them with vigor.
When people think greenfield, they think new data centers. But new opportunities are not limited to new concrete. Frequently, deploying a new application offers a chance to introduce new architecture. The challenge is having the discipline, in these moments, not to trot out the same requirements that have been perpetually added to over the last decade. If the objective is to remove complexity, make sure the discussions around what to include are rigorous. Every addition adds entropy to the system; it ought to be difficult to justify adding unnecessary complexity.
The bottom line
The most impactful aspect of SDN is that it offers people the opportunity to revisit their architectural decisions. If SDN gets relegated to a technology add-on on top of an already overburdened network, a huge opportunity to take a bite out of networking’s Second Law will have been lost. It’s rare that the industry climate provides air cover to make architecturally sound decisions. It is imperative that we make the most of this chance. It’s been a generation in the making, and we cannot know when it will be back.
[Today’s fun fact: The official gem of Washington is petrified wood. Not too bright, those folks.]
Published at DZone with permission of Mike Bushong, DZone MVB. See the original article here.