At the recent Open Compute Summit (OCP) and the Open Networking Summit (ONS), the overarching themes were “data center hardware must be commodity” and “software continues to eat the world.” The OCP prides itself on being a pioneer, having introduced the notion of open hardware with the goal of “commoditizing” it. Large data center operators (Facebook, Microsoft, Google, and Rackspace) have contributed hardware rack designs to the OCP, and server and switch vendors have contributed data center server and switch designs. Network interface card (NIC) designs for OCP-compliant servers have also been developed and contributed. Multiple open software and API specifications related to data center switches have been proposed or contributed at these two forums, by cloud and telco service providers as well as by vendors. The data center switch was seemingly the last bastion: until recently, it had long been considered a black box and “not commodity.” The impetus, therefore, has been to disaggregate it and make it like the server, where the hardware is commodity and the software that runs on it is separate, comes from another source, and is preferably open.
Let’s look at some of the relevant definitions of commodity: an economic good; a mass-produced product; a good or service whose wide availability typically leads to lower price. Silicon and hardware system vendors continue to invest in R&D, bringing products that scale more efficiently; recent innovations include 3.2Tbps ToR switches and the introduction of 25GbE in NICs and switches. These innovations make data center networking more economical, in line with the commodity doctrine. This is a new world in which hardware vendors have to find a way to thrive.
There used to be a place for specialized networking hardware. There used to be a place for tightly integrated networking software and hardware that provided special values. Judging by the discourse at venues like the OCP and ONS, both are a thing of the past.
Data centers are more sophisticated today than they were in the past, so those values could not have simply evaporated. If one looks under the hood at how large, efficient data center infrastructures are designed today, it becomes clear that the use of such values has actually increased. What has changed is how they are implemented, and where. The networking switch infrastructure has become simple, allowing white box commodity switches with simpler, disaggregated software to become viable. Complex and expensive middle boxes (firewalls, etc.) are similarly fading. The implementation point for all such values is now the server. Server-based networking has taken hold, and as traffic patterns have shifted and workloads and security requirements have grown, how networking is done in the server continues to evolve.
At OCP, Microsoft discussed their design of an FPGA-based SmartNIC that delivers higher scale and efficiency in the Azure cloud network. At ONS, AT&T discussed the need for SmartNICs for better observability, leading to higher service levels. The place where values are delivered is certainly shifting to the server, but does that mean specialized networking hardware on the servers? Netronome’s Agilio SmartNICs lower TCO dramatically while following the doctrine of economics and commodity; a version of its open SmartNIC adapter design is available from the OCP at http://www.opencompute.org/wiki/Server/SpecsAndDesigns.