Open networking is a hot topic these days. When I read about products and initiatives related to open networking, more often than not, the emphasis is on network switches. The industry has been hurt by the fact that, in the past, network switches such as top-of-rack (TOR) switches have been closed. Networking in commercial off-the-shelf (COTS) servers, by contrast, has been open, thanks to the proliferation of Linux server operating systems (OSs) and networking technologies like Open vSwitch (OVS). The industry wants the switch world to follow that successful trend; hence the birth and popularity of the term “open networking.”
Are the industry-wide open networking initiatives all about opening up switches? Not really.
Switches traditionally have been closed: the network operating systems and protocols that run on them have been proprietary, could not be disaggregated from the hardware, and were not open source. At first, switches were completely closed, with the switch ASIC, the software and the switch box all coming from a single vendor and all proprietary. Then they were partially disaggregated, as the switch ASIC became merchant switch silicon (e.g. Broadcom). Next came OpenFlow and OpenFlow-based SDN controllers (e.g. Floodlight), which proposed that the core of the switch OS and protocols be removed from the switch and placed in an open source controller, in some ways disaggregating the OS from the switch box. Subsequently, switch operating systems came to market (e.g. Cumulus) that are disaggregated in the sense that they can be installed and run on merchant-silicon-based switch boxes from multiple vendors (e.g., Quanta, Dell); such disaggregated switch OSes are not necessarily open source. More recently, open source switch operating systems (e.g. SONiC, Open Network Linux) have been in the news, and the open source controller ecosystem has evolved further as well, focusing on feature completeness and carrier-grade reliability (e.g. OpenDaylight, ONOS). All in all, the significant action and news in open networking have centered on switches, geared toward helping the industry manage the switch supply chain more effectively and deploy more efficiently, much as it already does with COTS servers.
What seems to get overlooked in these discussions about open networking is the all-important precursor to this movement – open networking on servers – and, most importantly, how server-based open networking has evolved and enabled open networking on switches.
TOR switches have become simpler. With leaf (TOR) and spine switches, the imperative has shifted to moving east-west traffic as efficiently as possible, which requires more bandwidth, more ports and lower latency. As a result, the hardware and software feature requirements of leaf and spine switches have shrunk to a simpler set, making open networking in switches easier to implement and deploy. The smarts of networking, however, did not disappear. They simply moved to the server, where they are implemented in the virtual switch – preferably an open one such as OVS – and in other Linux networking features like iptables. Many new networking features related to network security and load balancing have been added to OVS. OpenStack has rapidly risen to prominence as an open source cloud orchestration platform, and more than 60% of OpenStack Neutron networking deployments today use OVS. Even proprietary server operating systems like Microsoft Windows Server have evolved to put more networking smarts on the server (e.g. the Azure SmartNIC and its associated networking software). Server-based open networking has evolved relatively quietly compared to open networking in switches, but its contributions to deployment efficiency and flexibility are paramount. A minimal sketch of what those server-side smarts look like in practice follows below.
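To make the idea concrete, here is a minimal sketch of server-side networking smarts: an OVS bridge is created, a VM-facing port is attached, a simple OpenFlow rule steers traffic, and an iptables rule enforces a basic security policy. The bridge name, port names and addresses (br0, vnet0, 10.0.0.10) are hypothetical, and the snippet assumes a Linux host with Open vSwitch and iptables installed and root privileges; it is an illustration, not a production configuration.

```python
# Sketch: configuring server-side networking with OVS and iptables.
# Names and addresses below are hypothetical; requires root, Open vSwitch
# and iptables on the host.
import subprocess

def run(cmd):
    """Run a command, echo it, and fail loudly on a non-zero exit code."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a virtual switch on the server and attach a (hypothetical) VM port.
run(["ovs-vsctl", "--may-exist", "add-br", "br0"])
run(["ovs-vsctl", "--may-exist", "add-port", "br0", "vnet0"])

# Push a simple OpenFlow rule: send IP traffic for 10.0.0.10 out port 2.
run(["ovs-ofctl", "add-flow", "br0",
     "priority=100,ip,nw_dst=10.0.0.10,actions=output:2"])

# Use iptables for a basic security policy: drop forwarded telnet traffic
# arriving on the VM port.
run(["iptables", "-A", "FORWARD", "-i", "vnet0", "-p", "tcp",
     "--dport", "23", "-j", "DROP"])
```

In real deployments this kind of configuration is typically driven by an agent (for example, the Neutron OVS agent in OpenStack) rather than by hand, but the division of labor is the same: the server hosts the policy and the virtual switch, while the TOR switch simply forwards packets.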
Today, in many high-growth cloud, SDN and NFV applications, server-based open networking is running into server sprawl and the related total cost of ownership (TCO) challenges. At Netronome, we take pride in being pioneers in bringing hardware-based efficiencies to server-based open networking. We are helping make the case for efficient and flexible open networking from the COTS server side – where it all began.