
Network cost and complexity: As simple as changing the y-intercept?


In a previous post, I wrote about the incremental nature of innovation, particularly in the networking space. My point was that innovation (both from a product and a deployment perspective) occurs from a frame of reference. Understanding that foundation is critical to determining strategies, especially around go-to-market and adoption.

From a network user perspective, the incremental nature of architectural evolution means that users are far more likely to adopt something that they can identify with. To the extent that new capabilities can be framed up relative to existing deployments, migrations are easier. This is actually a healthy dynamic as it creates a bit of architectural longevity. Indeed, it would be incredibly difficult to operate in an environment that is perpetually in flux.

What triggers architectural change?

But architectural changes do happen. Understanding why can be helpful in planning for them, and ultimately for evaluating what to migrate to.

I’ll make the assertion here that cost and complexity are at least correlated (if not causal). As complexity increases, the cost of managing that complexity also increases. Complexity drivers can be as simple as the number of devices in a deployment, the ease with which those devices are connected, or even the sophistication required to perform activities like traffic engineering. Whatever the cause of complexity, as it goes up, there is a correlated increase in effort (time or money).

The appetite for making architectural changes gradually increases until either the cost or the complexity threshold is exceeded. Once either the economics or the ease of use (frequently seen as service agility) limits are reached, it is generally time for a new approach.

It’s easy to pick on Cisco in the cost and complexity game. Their legacy platforms have gone through years of development abuse. Piling on feature after feature ultimately results in a chassis bursting at the seams because of software that has grown increasingly bloated over time. The sheer number of lines of code in their legacy software makes it unwieldy at best.

Cisco would probably even agree with this characterization (unless you ask a sales guy about to close you on an aging platform). This is why they have spawned new product lines over time, based on new supporting infrastructure. It’s all quite natural really.

Increasing cost and complexity creates opportunity

If we look at what Arista did several years ago, we can see the impact of shedding the extra weight of an aging product. By being aggressive in pursuing merchant silicon, they shed a big contributor to their own complexity, which allowed them to drop prices quite a bit. Then they started clean with their software, EOS. It is much easier (both cheaper and faster) to build products unencumbered by backward compatibility requirements for features that border on prehistoric.

The drop in complexity brought an immediate reduction in cost (both capital and operational). For users who had hit either the cost or the complexity barriers, deciding on a new platform could be relatively easy.


But fundamentally, is the slope of the cost-complexity line any different?

The most interesting question for network architects and operators is not whether a solution sits lower on the cost-complexity line, but whether the slope of that line is any different. If, after incurring the expense and effort of an architectural transition, you are fundamentally on the same line, all you have really done is change the y-intercept. That is kicking the can down the road: you will eventually hit the same cost-complexity threshold and face the same transition again. Only this second transition will be more expensive, because it will happen at higher scale, which pushes up both the effort and the impact.
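To make the metaphor concrete, here is a minimal sketch modeling cost as a linear function of complexity. All of the numbers are hypothetical; the point is only that lowering the y-intercept buys a little time before the pain threshold, while lowering the slope buys much more.

```python
# Illustrative only: model cost(x) = slope * x + intercept,
# where x is some measure of deployment complexity.
# All slopes, intercepts, and thresholds below are hypothetical.

def complexity_at_threshold(slope, intercept, threshold):
    """Complexity level at which cost reaches the pain threshold."""
    return (threshold - intercept) / slope

THRESHOLD = 100  # the cost level at which a migration is forced

scenarios = [
    ("incumbent",        2.0, 40),  # aging platform
    ("lower y-intercept", 2.0, 10),  # cheaper, but same slope
    ("lower slope",       0.5, 40),  # real architectural change
]

for name, slope, intercept in scenarios:
    x = complexity_at_threshold(slope, intercept, THRESHOLD)
    print(f"{name}: hits threshold at complexity {x:g}")
```

With these made-up numbers, the cheaper same-slope option reaches the threshold at complexity 45 instead of 30, while the flatter line does not hit it until 120: the transition that changes the slope extends the architecture's life by far more than the one that only drops the price.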

The goal ought to be to extend the life of the architecture, preserving your investments in training, process, tools, and integrations. But for this to happen, you need to look beyond just the y-intercept. If a solution looks and feels the same as its predecessor, is the slope of the line really different?

This creates an interesting strategic dynamic. Adoption is easiest when things are equivalent. But value will be greatest when there is real innovation. As a vendor, how do you strike the right balance between the two? Lean too much on the first, and you get disrupted by everything around you (Arista leaning on price and then having white box switching come along, for example). Lean too much on the second, and you might see slow adoption (SDN anyone?).

It’s impossible to strike a perfect balance, so where do you lean?

What to look for

Architects and operators should put this question to vendors and see what kinds of answers come back. An honest answer should lead to follow-on questions. If the solution is largely the same as the incumbent, what are you doing about changing the slope of the line? If the solution is different, what are you doing to ease migration?

An honest dialogue around these topics can be difficult in sales settings, but it is absolutely essential if you want to do something more than just changing the y-intercept.



Published at DZone with permission of

Opinions expressed by DZone contributors are their own.
