Traffic advisory: your packets may be delayed
This post comes from Marten Terpstra at the Plexxi blog. The past few years have seen a dramatic improvement in the latency of network switches. Single-ASIC-based switches can all pretty much switch packets in less than a microsecond. Current 10GE switching silicon provides anywhere from 300 to 800 nanoseconds, and specialized silicon shaves that to less than 200 nanoseconds by limiting the amount of searching that needs to be done, reducing the size of the lookup tables. Other solutions play smart tricks, providing forwarding hints so that intermediate switches can make those lookups in less than 50 nanoseconds.
Modular switches inherently have higher latency. Line cards on modular switches typically have multiple ASICs, and those ASICs are connected through a single- or multi-stage fabric. Each step takes time, resulting in latencies that vary from around a microsecond when a packet stays on the same ASIC to perhaps 5-15 microseconds when a packet needs to travel through the fabric and back.
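A rough back-of-the-envelope model of where that modular-switch latency comes from can be sketched as below. The per-ASIC and per-fabric-stage figures are illustrative assumptions loosely based on the numbers above, not measurements of any particular switch.

```python
# Rough latency model for a modular switch. Component costs are assumed
# for illustration: real ASICs and fabrics vary widely.

NS_PER_ASIC_HOP = 800        # per-ASIC switching cost (upper end of 300-800 ns)
NS_PER_FABRIC_STAGE = 2_000  # assumed cost of traversing one fabric stage

def modular_switch_latency_ns(fabric_stages: int) -> int:
    """Latency for a packet: ingress ASIC + fabric stages + egress ASIC."""
    if fabric_stages == 0:
        # Packet stays on the same ASIC: a single switching hop.
        return NS_PER_ASIC_HOP
    # Packet crosses the fabric: two ASIC hops plus the fabric stages.
    return 2 * NS_PER_ASIC_HOP + fabric_stages * NS_PER_FABRIC_STAGE

print(modular_switch_latency_ns(0))  # same-ASIC path: sub-microsecond
print(modular_switch_latency_ns(3))  # three-stage fabric: several microseconds
```

With these assumed numbers, the same-ASIC path comes out under a microsecond while a three-stage fabric traversal lands in the mid-single-digit microseconds, consistent with the range quoted above.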
The speediest ASICs achieve these low numbers by employing cut-through switching, which allows the ASIC to start transmitting a packet as soon as enough of the header has been received to make a forwarding decision. The ASIC does not wait for the entire packet to be received (the more traditional store-and-forward mechanism): within the first few hundred bytes the forwarding decision has been made, and that same header (modified or not) is being transmitted out the destination port. It is somewhat odd to think through, but the first bits of a packet may be received by the destination system before the last bits have left the first switch in the network.
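The difference between the two modes is simple serialization arithmetic, sketched below. The 64-byte "enough header" figure and the 300 ns lookup cost are assumptions for illustration; the 10GE rate and 1500-byte packet match the discussion in this post.

```python
# Store-and-forward vs cut-through latency through a single switch.
# HEADER_BYTES and LOOKUP_NS are illustrative assumptions.

LINK_BPS = 10e9    # 10GE
HEADER_BYTES = 64  # assumed bytes needed before a forwarding decision
LOOKUP_NS = 300    # assumed lookup/processing time

def serialization_ns(nbytes: int, bps: float = LINK_BPS) -> float:
    """Time for nbytes to arrive on the wire at the given rate."""
    return nbytes * 8 / bps * 1e9

def store_and_forward_ns(pkt_bytes: int) -> float:
    # The whole packet must be received before transmission can begin.
    return serialization_ns(pkt_bytes) + LOOKUP_NS

def cut_through_ns(pkt_bytes: int) -> float:
    # Only the header must be received; the rest streams through.
    return serialization_ns(HEADER_BYTES) + LOOKUP_NS

print(store_and_forward_ns(1500))  # full 1500-byte receive plus lookup
print(cut_through_ns(1500))        # header receive plus lookup
```

Under these assumptions a 1500-byte packet incurs roughly 1.5 microseconds store-and-forward but only about 350 nanoseconds cut-through, which is why the fastest ASICs lean on the latter.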
Cut-through switching comes with quite a few "buts". Most switches can only use cut-through switching when the source and destination ports run at the same speed. 10GE in and 40GE out (or vice versa) is rarely supported, and the ASIC will automatically fall back to store-and-forward for those packets, for good reason: if a packet arrives at 40GE rates, you simply cannot transmit it out a 10GE interface; that interface is not fast enough. In the reverse direction speed is not the issue, but if you were to use cut-through switching, your 40GE interface would effectively run at 10GE for the duration of that packet, with lots of pauses between pieces of the packet (figuratively speaking).
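The speed-mismatch problem falls straight out of serialization times, as this small sketch shows: a packet finishes arriving on a 40GE port long before a 10GE egress port could finish sending it, so a cut-through egress would run dry mid-packet.

```python
# Serialization time of the same packet at different port speeds.
# Pure arithmetic; no assumptions beyond the packet size and rates.

def serialization_us(nbytes: int, gbps: float) -> float:
    """Microseconds to put nbytes on the wire at gbps gigabits/second."""
    return nbytes * 8 / (gbps * 1e3)

PKT = 1500  # bytes
print(serialization_us(PKT, 40))  # time to fully receive at 40GE
print(serialization_us(PKT, 10))  # time to fully transmit at 10GE
```

At 40GE the packet arrives in 0.3 microseconds, but the 10GE egress needs 1.2 microseconds to send it: four times longer, hence the forced fallback to store-and-forward.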
In addition, when the destination port already has a packet being transmitted or waiting in its queue, a new packet cannot be sent cut-through. When another packet is ahead of you, you need to wait, and you may need to wait for quite a while. We often forget that it takes 1.2 microseconds to transmit a 1500-byte packet on a 10GE interface, and more than 7 microseconds for a jumbo packet. When the destination port is being paused by Data Center Bridging Priority Flow Control (PFC), the packet will be queued for store-and-forward. And make sure you add an extra 3 microseconds for 10GBASE-T.
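The head-of-line wait described above is just the sum of the serialization times of everything already queued, which can be sketched as:

```python
# Head-of-line blocking: a new arrival waits out the full serialization
# time of every packet already ahead of it on the egress port.

def transmit_us(nbytes: int, gbps: float = 10.0) -> float:
    """Microseconds to serialize nbytes at gbps gigabits/second."""
    return nbytes * 8 / (gbps * 1e3)

def queue_wait_us(queued_packet_sizes: list[int], gbps: float = 10.0) -> float:
    """Total wait behind the packets already queued on the egress port."""
    return sum(transmit_us(n, gbps) for n in queued_packet_sizes)

print(transmit_us(1500))           # 1500-byte packet on 10GE, as in the text
print(transmit_us(9000))           # jumbo frame on 10GE
print(queue_wait_us([1500] * 10))  # ten full-size packets already ahead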
Data centers are on a path to fewer layers of switching. Spine-and-leaf networks are being pitched as the best-performing, low-cost solution for dense networks. If you carefully examine the specs and pitches of some of the newer spine switches, you will notice that all of them make a case for deep buffers. Deep buffers assume that the switch needs to manage congestion by buffering packets; why else would you design expensive and power-hungry buffer memory into those switches? Buffering and low latency do not go well together. If your spine-and-leaf network has little to do, you may well see latencies of only a few microseconds or better. If the spine layer needs to buffer your packet, that number can quickly jump to tens of microseconds. And those large buffers suggest it will.
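Why deep buffers and low latency are at odds is again simple arithmetic: a packet arriving at a port whose buffer already holds B bytes waits roughly B divided by the line rate before it drains. The 12 MB buffer size below is an assumed figure for illustration, not a quoted spec.

```python
# Queueing delay implied by buffer occupancy: buffered bytes must drain
# at the line rate before a newly arrived packet gets out.

def buffer_delay_us(buffered_bytes: int, gbps: float = 10.0) -> float:
    """Microseconds a packet waits behind buffered_bytes at gbps line rate."""
    return buffered_bytes * 8 / (gbps * 1e3)

print(buffer_delay_us(50_000))      # a modest 50 KB of queued traffic
print(buffer_delay_us(12_000_000))  # an assumed 12 MB deep buffer, full
```

Even 50 KB of queued traffic adds 40 microseconds on a 10GE port, dwarfing sub-microsecond switching latency; a full multi-megabyte deep buffer pushes the delay into milliseconds.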
There certainly are applications that are very sensitive to latency. Low-latency trading networks at financial institutions are the example always used, and there are High Performance Computing environments with database, RDMA, or similar applications that benefit from really low latency. Engineering the traffic so that none of the latency-disrupting events described above happen is hard. Really hard. Extremely hard if there is a lot of traffic, or a lot of endpoints. Networks that are specifically designed to aggregate and distribute (spine and leaf) will be more prone to these latency-increasing scenarios. A network that can create isolated, direct paths between the switches that serve low-latency applications is much more likely to avoid them. And even if the absolute latency is not the lowest, consistent latency with little jitter will certainly help the performance of adaptive mechanisms like TCP.