The Network Infrastructure Matters

By Netronome | Nov 14, 2017

At the SDN NFV World Congress in The Hague I was asked a very interesting question. An engineer from Corning was walking the exhibition floor looking for people to connect with on a business or technical level, and he was having a hard time. (For those who don’t know, Corning makes more than the glass used on our smartphones. They also make the fibers used in fiber optic cables – those things that transmit the bits to and from our servers.) He noticed that most people, and nearly all companies at the show, were focused on application-layer software. Very few got down to the level of glass quality in optical cables, so he asked, “Does anyone here care about the cabling that transmits their data?”

My answer was convoluted, as many of my answers tend to be. I should probably work on that. I said, “No, most people don’t get down to that level of infrastructure. However, if their data – be it big data, little data, any data – is not flowing, they get really interested, really quickly.”

It’s a challenge that all infrastructure providers who aren’t Intel face, and maybe we need to step up our game. Application software is what gets the most attention – after all, it’s closest to the consumer – but it’s often what’s behind the scenes that has the most impact on the user experience. Data needs to flow to and from a server, and for that to happen efficiently, it has to flow over an optimized network.

The key element of an optimized network used to be the switching infrastructure, but software-defined networking has changed that. Now, compute servers are critical elements of the network. It is well understood that a software solution – in this case, networking – can run on a general-purpose CPU. The challenge is that networking is a specialized task with a different processing model, and general-purpose CPUs are not optimized to process network traffic. Hosting millions of network flows and processing network functions like match-action table lookups does not lend itself well to a general-purpose CPU’s memory hierarchy.
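To make the idea concrete, here is a minimal sketch of a match-action table – a hash map from a flow’s 5-tuple to a forwarding action. This is purely illustrative (the flow keys and action strings are invented for the example, and real SDN datapaths such as Open vSwitch implement this far more elaborately); the point is that at millions of entries, a table like this no longer fits in CPU cache.

```python
# Illustrative match-action table: map a flow 5-tuple to an action.
# Keys and actions here are hypothetical, for demonstration only.
from typing import NamedTuple

class FlowKey(NamedTuple):
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    proto: int  # 6 = TCP, 17 = UDP

# Each table entry matches one flow and names the action to apply.
table = {
    FlowKey("10.0.0.1", "10.0.0.2", 12345, 80, 6): "forward:port1",
    FlowKey("10.0.0.3", "10.0.0.4", 54321, 443, 6): "forward:port2",
}

def lookup(key: FlowKey) -> str:
    # A miss falls through to a default action (e.g., punt to controller).
    return table.get(key, "punt")

print(lookup(FlowKey("10.0.0.1", "10.0.0.2", 12345, 80, 6)))  # forward:port1
print(lookup(FlowKey("1.2.3.4", "5.6.7.8", 1, 2, 17)))        # punt
```

Every packet triggers at least one such lookup, so at tens of millions of packets per second the cost of each lookup – and where the table lives in the memory hierarchy – dominates.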

A table lookup is incredibly fast when the table is in local memory; it’s much slower when the table is not in the local cache. With Agilio CX SmartNICs, we provide 2GB of DRAM per network card. This on-card memory enables Agilio SmartNICs to support up to two million concurrent network flows. Table lookups are nearly instantaneous, which results in faster network transactions and significantly decreased latency – a point underscored in Timothy Prickett Morgan’s recent article about Google’s network, “For Google Networks, Predictable Latency Trumps Everything.”
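The cache effect is easy to observe yourself. The rough sketch below (not a rigorous benchmark, and the sizes are arbitrary choices for illustration) times random lookups into a small table that fits comfortably in CPU cache and into a much larger one that does not. Same code, same algorithm – only the working-set size changes.

```python
# Rough illustration of memory locality: time random lookups into a
# cache-resident table vs. one far larger than typical CPU caches.
# Table sizes and lookup counts are arbitrary, chosen for demonstration.
import random
import time

def time_lookups(table_size: int, n_lookups: int = 200_000) -> float:
    table = list(range(table_size))
    # Random access pattern defeats the prefetcher, so cache behavior
    # dominates once the table outgrows the cache.
    keys = [random.randrange(table_size) for _ in range(n_lookups)]
    start = time.perf_counter()
    total = 0
    for k in keys:
        total += table[k]
    return time.perf_counter() - start

small = time_lookups(1_000)       # tiny table: stays cache-resident
large = time_lookups(4_000_000)   # large table: frequent cache misses
print(f"small table: {small:.4f}s, large table: {large:.4f}s")
```

On most machines the per-lookup cost rises noticeably for the large table, even though the Big-O cost of each lookup is identical – which is exactly why keeping flow tables in fast, local memory matters.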

Let’s take our discussion back to infrastructure. Whether the top-level application is machine learning, search, or ad serving, it is network performance – bandwidth and, above all, latency – that enables application performance.

For reference, take a look at the recent demo we created in partnership with Qualcomm. A traditional server running software-based networking delivered only 25 percent of the network bandwidth – and twice the latency – of a server using network acceleration and offload with Agilio CX SmartNICs.

Infrastructure matters. I may not have had a crisp answer for the engineer from Corning, but infrastructure definitely matters.