In this blog, I continue debunking myths about the role of data center switches in SDN and NFV deployments.
Myth #3: Nicira was one of the early implementers of OpenFlow and this is a key reason for their successful exit – touted as the first big one in the world of SDN.
Innovators at Nicira and in the Linux open source community created Open vSwitch (OVS). It implements a network switch datapath compliant with the evolving OpenFlow specifications defined by the Open Networking Foundation (ONF). The implementation is software that runs as part of the server operating system and includes both kernel and user mode components. It also exposes northbound interfaces that allow OpenFlow-compliant SDN controllers such as OpenDaylight to configure the OVS datapath.
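To make that controller hookup concrete, here is a minimal sketch (not from the original post) of how an OVS bridge might be created and attached to an OpenFlow controller such as OpenDaylight using the standard ovs-vsctl tool; the bridge name, controller address, and fail-mode choice are illustrative assumptions.

```python
import subprocess

def ovs(*args):
    """Invoke ovs-vsctl and raise if the command fails."""
    subprocess.run(["ovs-vsctl", *args], check=True)

# Create an OVS bridge; the name "br0" is arbitrary for this sketch.
ovs("--may-exist", "add-br", "br0")

# Point the bridge at an OpenFlow controller; the address is a
# placeholder, and 6653 is the IANA-assigned OpenFlow port.
ovs("set-controller", "br0", "tcp:192.0.2.10:6653")

# Keep forwarding traffic in standalone mode if the controller is down.
ovs("set-fail-mode", "br0", "standalone")
```

In a real deployment the controller would push flow rules down over this channel rather than relying on standalone forwarding.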
Nicira’s success, however, as most of us know, was due to their product innovation and positioning around network virtualization. Nicira adopted or invented tunneling technologies such as GRE, VxLAN and STT. GRE and VxLAN support were added to the OVS datapath to ease customer deployment of network virtualization. In fact, VxLAN was pushed by large companies such as Cisco and VMware. Network switch vendors implemented VxLAN and GRE Virtual Tunnel End Point (VTEP) capabilities in their hardware switches to enable easy integration with existing VLAN-based networks and to resolve performance-related issues. During this time, Microsoft and its partners introduced NVGRE as a competing technology to VxLAN. Nicira created STT as an alternative, but it has not been accepted by the industry; it requires stateful processing in network switches, which is difficult to do when implementing a VTEP in a switch.
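As a rough illustration of how OVS tunnel support eases such deployments, the sketch below (assumed, not from the post) attaches VxLAN and GRE tunnel ports to an existing bridge with ovs-vsctl; the bridge name, remote VTEP address, and VNI are placeholders.

```python
import subprocess

def ovs(*args):
    """Invoke ovs-vsctl and raise if the command fails."""
    subprocess.run(["ovs-vsctl", *args], check=True)

# Add a VxLAN tunnel port; remote_ip is the far-end VTEP and key is
# the VxLAN Network Identifier (both placeholders here).
ovs("add-port", "br0", "vxlan0",
    "--", "set", "interface", "vxlan0", "type=vxlan",
    "options:remote_ip=198.51.100.20", "options:key=5000")

# A GRE tunnel port is configured the same way; only the type changes.
ovs("add-port", "br0", "gre0",
    "--", "set", "interface", "gre0", "type=gre",
    "options:remote_ip=198.51.100.20")
```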
Nonetheless, Nicira was a first mover with a complete network virtualization solution that included their flagship NVP (Network Virtualization Platform) SDN controller. The company was touted as bringing virtualization to networking, much as VMware brought virtualization to servers. The fundamental technology building blocks were support for tunneling technologies such as GRE, VxLAN, and STT; flow processing (match and action) capabilities using OVS; and centralized control of both through the NVP SDN controller. The rest is history. Along the way, Nicira downplayed OpenFlow to the point where its co-founder and the co-inventor of OpenFlow, Martin Casado, acknowledged that Nicira got OpenFlow wrong (“The problem is, we actually got it [OpenFlow] wrong, and I think a lot of the industry hasn't realized how wrong it was…”).
Today, OVS is widely used; more than 60 percent of OpenStack deployments use OVS for virtual switching and networking services. OpenFlow-based northbound interfaces continue to be used to integrate with SDN controllers. However, the bulk of OVS innovation is no longer tied to OpenFlow or to the work conducted in the ONF. Recent innovations include more efficient flow processing in the kernel components; user space datapaths built on DPDK; flow-based load balancing; new tunneling and encapsulation options such as GENEVE, MPLS, and MPLS over GRE; and integration with Linux connection tracking for stateful security, among many others. To facilitate faster evolution of the kernel networking datapath, the OVS community is considering technologies such as P4 and eBPF.
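For a flavor of what the connection-tracking integration looks like in practice, here is a small, hypothetical example (not from the original post) that programs stateful rules on a bridge with ovs-ofctl; the bridge name and port numbers are assumptions.

```python
import subprocess

# Hypothetical stateful policy: connections may only be initiated from
# port 1 toward port 2; return traffic is allowed once established.
FLOWS = [
    # Let ARP through so hosts can resolve each other.
    "table=0,priority=5,arp,actions=normal",
    # Send untracked IP packets through the connection tracker and
    # continue processing in table 1.
    "table=0,priority=10,ip,ct_state=-trk,actions=ct(table=1)",
    # Commit new connections arriving on port 1 and forward to port 2.
    "table=1,priority=10,ip,ct_state=+trk+new,in_port=1,actions=ct(commit),output:2",
    # Allow packets belonging to connections committed above.
    "table=1,priority=10,ip,ct_state=+trk+est,actions=normal",
    # Drop anything else.
    "table=1,priority=0,actions=drop",
]

for flow in FLOWS:
    subprocess.run(["ovs-ofctl", "add-flow", "br0", flow], check=True)
```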
So yes, Nicira is credited with early innovations related to OVS, and OVS today is a widely used open source technology for server-based data center networking. Nicira brought a compelling network virtualization product, the Nicira NVP, to the industry, but OpenFlow was not a significant part of that offering. While the datapath implemented by OVS is based on the OpenFlow specification, the marriage between OVS and OpenFlow has not lasted; OVS has found broader use, including with OpenStack, and recent OVS innovations have moved further away from the OpenFlow-related initiatives conducted in the ONF.
At Netronome, we are closely following and contributing to the evolution of OVS and its tight integration with OpenStack, while bringing the performance efficiencies of hardware to server-based networking on COTS servers.
Myth #4: In the largest data centers, SDN, NFV, and the related efficiency benefits are being realized through significant feature innovations in high-density networking switches.
Stay tuned for part 4 in this series.
Read the Blog, "10 Myths about SDN, NFV and Data Center Switches: Debunked: Part 2" by Sujal Das.
Read the Blog, "10 Myths about SDN, NFV and Data Center Switches: Debunked" by Sujal Das.