OpenFlow 2.0 Could Bring New Flexibility to Switches
OpenFlow 2.0 doesn’t formally exist yet, but one possible shape of the protocol — a more flexible take on packet switching — is starting to form.
A research paper outlines the idea and sums it up nicely in one sentence: “We believe that future generations of OpenFlow should allow the controller to tell the switch how to operate, rather than be constrained by a fixed switch design” (emphasis in the original).
In other words, what’s being proposed is a new type of switch, one that’s configurable in ways that aren’t possible today.
You can see some of the implications at this week’s Open Networking Summit (ONS). A poster session presented by Stanford University with Barefoot Networks and Intel investigates ways to populate the router/switch tables in this new model.
The abovementioned research paper, “Programming Protocol-Independent Packet Processors,” was published in December and updated this week. Its authors include some key names in software-defined networking (SDN): Nick McKeown, the Stanford professor who helped spawn SDN and OpenFlow, and Jennifer Rexford, a Princeton University professor.
Amin Vahdat, a Google distinguished engineer, is listed as well; he’s also a Wednesday morning keynoter at ONS. Intel, Microsoft, and startup Barefoot Networks contributed to the paper as well.
Note that nothing in the paper has formally been blessed by the Open Networking Foundation (ONF) as the official OpenFlow 2.0. But it’s an interesting direction that could help future-proof OpenFlow — assuming switches emerge that can support it.
Field Explosion in OpenFlow
The problem they’re hoping to solve is the shifting nature of the OpenFlow protocol.
Consider the packet header. OpenFlow checks certain fields in the header (the destination IP address, for example) and looks for a match in its flow tables. Having found a match, OpenFlow takes the action dictated in the table.
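The match-action lookup described above can be modeled in a few lines. This is a minimal sketch, not OpenFlow itself: the field names, actions, and the `lookup` helper are all invented for illustration, and a real switch does this matching in hardware against prioritized flow entries.

```python
# Toy model of OpenFlow-style match-action lookup.
# Field names and action strings are illustrative, not from the spec.

def lookup(flow_table, packet):
    """Return the action of the first table entry that matches the packet."""
    for match, action in flow_table:
        # An entry matches if every field it specifies agrees with the packet.
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "send_to_controller"  # table miss: punt the packet to the controller

table = [
    ({"dst_ip": "10.0.0.2"}, "output:port2"),
    ({"eth_type": 0x0806}, "flood"),  # ARP traffic
]

print(lookup(table, {"dst_ip": "10.0.0.2", "eth_type": 0x0800}))  # output:port2
print(lookup(table, {"dst_ip": "10.0.0.9", "eth_type": 0x0800}))  # send_to_controller
```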
But the number of fields OpenFlow checks keeps increasing, from 12 with OpenFlow 1.0 up to 41 with OpenFlow 1.4. The spec keeps getting more complicated as it gets extended into different use cases.
What if things could work the opposite way — with the hardware changing, instead of the protocol? “Rather than repeatedly extending the OpenFlow specification, we argue that future switches should support flexible mechanisms for parsing packets and matching header fields — and allow controller applications to leverage these capabilities through a common, open interface (i.e., a new ‘OpenFlow 2.0’ API),” the paper reads.
The new breed of switch, called a programmable, protocol-independent packet processor, would be configured using a high-level language and compiler (the authors propose the name P4) in which users could specify the rules to put into the tables. P4 is also where the logical dependencies between tables would be defined. (If Table 2 has entries that send packets to either Table 4 or Table 5, then those three tables have a logical dependency.) P4 would pass all that information to the control plane, which would use OpenFlow 2.0 to install the rules in the switches.
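The Table 2 → Table 4/Table 5 example above can be sketched as a tiny pipeline model. This is not P4 syntax — the table names, the branching condition, and the `trace` helper are invented here purely to show what a logical dependency between tables means.

```python
# Toy model of table dependencies: each table's action can direct a
# packet to another table. Table names and conditions are invented.

pipeline = {
    "table2": lambda pkt: "table4" if pkt["eth_type"] == 0x0800 else "table5",
    "table4": lambda pkt: None,  # terminal table: no further lookup
    "table5": lambda pkt: None,
}

def trace(start, pkt):
    """Follow a packet through the chain of logically dependent tables."""
    path, table = [], start
    while table is not None:
        path.append(table)
        table = pipeline[table](pkt)
    return path

print(trace("table2", {"eth_type": 0x0800}))  # ['table2', 'table4']
print(trace("table2", {"eth_type": 0x86DD}))  # ['table2', 'table5']
```

Because Table 2's action decides whether Table 4 or Table 5 runs next, a compiler like P4 must schedule those tables after Table 2 — that ordering constraint is the dependency.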
This would allow for the creation of table types that OpenFlow currently can’t handle. It also means normal Ethernet switching processes could be upended — you could order the switch to look up the destination MAC address before looking up the Ethertype (it’s normally done in the reverse order).
The result is that the switch could adapt to new types of switching, whether it’s OpenFlow 2.0 or something crazier.
Such switches don’t exist yet — and that’s apparently where Barefoot Networks comes in. The startup, rumored to be working on Ethernet chips, might be trying to build this fully reconfigurable switch.
Stanford: Doing the Math
The Stanford presentation tackles a specific aspect of OpenFlow 2.0: how to optimally allocate the tables among a group of switches. They haven't yet implemented their work on physical switches, but based on what a researcher told me Monday at ONS, they've built a solid mathematical foundation for the process.
The question they’re researching is how to best assign these newly flexible tables into switches, taking into account issues such as memory. They’ve settled on integer programming as the means to do this. (A simple, greedy algorithm didn’t work, so yes, the problem is worth researching.) The poster presentation explains their results so far.
They’ve brought their work to Intel, but there, they’ve got a whole other set of challenges. For example, Intel’s FM6000 family of switch chips, also called Alta, support table flows that aren’t strictly linear. Stanford’s algorithms assume a straight pipeline of switches — in other words, traffic goes from Switch 1 to Switch 2 to Switch 3 and so on, in a straight line. Intel has a flexible pipeline where multiple steps can exist at once, and different stages can use different memory types.
Published at DZone with permission of Craig Matsumoto, DZone MVB. See the original article here.