When people talk about pricing and switch sizes today, the answers are usually framed as some derivative of the per-port discussion. If you ask a vendor how much a switch costs, they will typically report back some price-per-port number. The term has become the de facto standard for how we talk about switches.
Forgetting for a moment that everyone is using more or less the same switch silicon, how companies frame things like performance and price varies from switch to switch. You would think that with the entire world centered around Broadcom, the industry would at least settle on standard terminology and measurements.
That hasn’t happened, and the result requires a far too nuanced discussion to compare apples to apples. Vendor A might include things like transceivers in their numbers. Maybe vendor B doesn’t include the cost of the cables. Perhaps another vendor quotes the total Broadcom ports, mixing both access and uplink ports. Whatever any individual vendor’s practices, collectively we all need to arrive at a much more precise discussion over the coming years.
The problem is not just with pricing by the way. When the discussion gets muddled, it becomes more difficult to talk about things like the number of usable ports, cross-sectional bandwidth, and the like.
Now consider that as IT evolves, so too will these numbers. Today, most Broadcom-based switches in high-performance data centers operate as either 1:1 or 3:1 oversubscribed. That is to say, each Broadcom chip supports a fixed number of ports; Trident-II, for example, supports 96. When we talk about 1:1, we just mean that 48 of those are access ports and 48 are uplink ports from the chip. How those get plumbed to the physical ports on the device is another matter. Similarly, a 3:1 oversubscribed switch typically means that 72 Broadcom ports tie to the access ports, with the remaining 24 ports being used for uplinks.
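The arithmetic above can be sketched in a few lines. This is a minimal illustration assuming all chip ports run at the same speed; the 96-port figure is the Trident-II number cited above.

```python
# Minimal sketch: oversubscription ratio for a fixed-radix switch chip.
# Assumes all chip ports run at equal speed, so the ratio reduces to
# a simple port count. 96 is the Trident-II figure cited in the text.

def oversubscription(access_ports: int, uplink_ports: int) -> float:
    """Ratio of access-side bandwidth to uplink-side bandwidth."""
    return access_ports / uplink_ports

CHIP_PORTS = 96  # total Broadcom ports on a Trident-II

# 1:1 (non-blocking): half the chip ports face servers, half face the fabric.
print(oversubscription(48, 48))   # 1.0
# 3:1 oversubscribed: 72 access ports share 24 uplinks.
print(oversubscription(72, 24))   # 3.0
```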
Why does this matter?
Architecturally speaking, we are going through some serious changes. The migration from 1GbE to 10GbE and then to 40 or 100GbE means that the growth in the total number of Ethernet switch ports in the datacenter is tapering some. When you combine that with the move to wireless in a lot of cases, this creates strong downward pressure on the total number of ports.
Don’t mistake this for meaning that data centers are not growing. They are still experiencing substantial growth, driven in no small part by the high amounts of east-west traffic resident in most environments. It’s just that the growth is less on the port side and more on the traffic side.
The implication here is actually interesting. When people ask about the price-per-port today, they are typically really asking: How much does it cost for each new server I add? This question makes sense because networking isn’t leading the buildout; it is supporting some decision that has been made on the compute side (likely in support of an application requirement). If your frame of reference is supporting servers, then you will want to know quite naturally how much switching costs on a per-server basis.
But this all changes if the thing that server expansion drives is not just access ports. If, instead, the relevant network characteristic that has to be addressed is east-west bandwidth, then normalizing the discussion around the number of access ports is not helpful. This is already happening, which is why some vendors include total Broadcom ports when they quote numbers, but because the discussion is not generally precise, this makes things unnecessarily confusing (especially when comparing numbers).
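A quick hypothetical makes the comparison problem concrete. All prices and counts below are made up for illustration; the point is that one switch yields two very different "price-per-port" numbers depending on which port count a vendor divides by.

```python
# Sketch of why imprecise "per-port" quotes confuse comparisons.
# The price and port counts are hypothetical, chosen only to show
# how the same switch produces different per-port figures.

switch_price = 20_000          # hypothetical list price, USD
access_ports = 72              # ports facing servers (3:1 example)
total_chip_ports = 96          # access + uplink ports on the chip

per_access_port = switch_price / access_ports
per_chip_port = switch_price / total_chip_ports

print(round(per_access_port, 2))  # 277.78 per access port
print(round(per_chip_port, 2))    # 208.33 per total chip port
```

Same box, same silicon, and the quoted number swings by a third depending on the denominator.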
If the future is about interconnectivity, what you might see is a fairly substantial change in how the available Broadcom ports are allocated. Imagine for a moment if the ratios were reversed. Instead of 72 access ports and 24 uplinks, what if you had 24 access ports and 72 uplinks? Remember here that I am talking about Broadcom ports, not necessarily physical ports. You could aggregate the 72 interconnect ports into a small number of physical interconnect links. This is some of the work that is being done with flattened butterfly topologies.
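The reversal above can be sketched numerically. The 4-lanes-per-link figure below mirrors the common practice of bonding four 10GbE lanes into one 40GbE physical port, but it is an assumption for illustration, not a vendor spec.

```python
# Illustrative sketch: reversing the access/uplink split on a 96-port chip
# and aggregating chip-level uplink ports into fewer physical links.
# lanes_per_link = 4 is an assumption (e.g., four 10GbE lanes per 40GbE port).

CHIP_PORTS = 96

def allocation(access: int, lanes_per_link: int = 4) -> dict:
    """Split the chip's ports and bundle uplinks into physical links."""
    chip_uplinks = CHIP_PORTS - access
    return {
        "access_ports": access,
        "chip_uplinks": chip_uplinks,
        "physical_uplinks": chip_uplinks // lanes_per_link,
    }

# Today's 3:1 shape: 24 chip uplinks collapse into 6 physical links.
print(allocation(72))
# Reversed, interconnect-heavy shape: 72 chip uplinks, 18 physical links.
print(allocation(24))
```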
If you think about how data centers might change, this gets really interesting. What if racks evolve so that you get a bunch of high-density compute and storage resources in a rack, connected through some high-speed internal links (think: PCIe)? In this scenario, the role of the switches is less to connect to all the servers and more to provide a high-bandwidth connection between racks of resources.
In this scenario, it could very well be that the most meaningful use of switching is less about getting to and from the server and more about getting back and forth between racks. This starts to look a little bit like transport between data centers, except that the confines of the resource pools are marked more by the racks than the walls within which they reside.
It’s hard to say when or even if this type of future comes to fruition. But if these types of changes start to take root, then the definition of “port” and all the derivative “per-port” discussions will need to get a whole lot more precise.