This week I had the distinct privilege of presenting at the annual Linley Data Center conference in Santa Clara. As I walked through the parking lot of the Hyatt Regency hotel across from Levi's Stadium (where the conference was being held), deftly navigating around massive mounds of pungent detritus from what must have been a heck of a Super Bowl party the night before, I couldn't help but be overcome by a somewhat anticlimactic feeling. Yes, for a little while, America's greatest sporting spectacle took center stage and changed the definition of hero in Silicon Valley from geek to jock.
Yet it was but a fleeting moment, and once I got in, got caffeinated, and started to listen to some great technical presentations (by geeks like me), the moment was gone. As usual, the Linley folks (you know who you are) did a great job laying the foundation for the two-day conference, with keen observations on everything from the uptake of ARM processors (or lack thereof), to the potentially emerging trend of hyper-converged servers. The latter is something I must say I was confused about at first, but Jag did a great job of educating me, and now I understand the concept of combining compute, networking, and storage into a modular "appliance" that can scale at a much finer level of granularity than the traditional data center rack or pod.
My talk, titled "Open vSwitch Implementation Options," discussed the challenges encountered when implementing server-based networking functions, such as virtual switching with network overlays and security, using a purely software-based (i.e., x86) approach. At Netronome, we have collected a lot of benchmark data and metrics to compare software-based OVS approaches, in both kernel and user space, against a hardware-accelerated approach with our new Agilio CX server adapters. We have been able to show that with Agilio we get a 5X throughput improvement while offloading OVS from the x86, which frees up 50% of the server's CPU resources so the servers can actually get back to running real applications. The net result of this acceleration and offload is that fewer servers are needed to perform a given amount of application processing, and we have been able to show that this translates directly into substantial TCO savings of between 50% and 80%, depending on the application workload.
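The server-count arithmetic behind that claim can be sketched in a few lines. Note that every number below (core counts, per-server cost, workload size) is an illustrative assumption for the sake of the example, not Netronome benchmark data; the point is simply how freeing CPU cores from OVS translates into fewer servers and lower TCO.

```python
# Back-of-the-envelope TCO comparison: software-only OVS vs. SmartNIC offload.
# All figures are illustrative assumptions, NOT measured benchmark results.

def servers_needed(total_app_load, cores_per_server, ovs_core_fraction):
    """Servers required when OVS consumes a fraction of each server's cores.

    Assumes (for illustration) that each unit of application load
    needs one dedicated core.
    """
    usable_cores = int(cores_per_server * (1.0 - ovs_core_fraction))
    return -(-total_app_load // usable_cores)  # ceiling division

APP_LOAD = 1000            # total cores' worth of application work (assumed)
CORES = 24                 # cores per server (assumed)
COST_PER_SERVER = 10_000   # assumed per-server cost, hardware plus operations

# Software OVS: assume half of each server's cores go to packet processing.
sw_servers = servers_needed(APP_LOAD, CORES, 0.5)
# Offloaded OVS: switching runs on the SmartNIC, freeing the cores for apps.
hw_servers = servers_needed(APP_LOAD, CORES, 0.0)

sw_tco = sw_servers * COST_PER_SERVER
hw_tco = hw_servers * COST_PER_SERVER  # SmartNIC cost omitted for simplicity
savings = 1 - hw_tco / sw_tco
print(f"software-only: {sw_servers} servers, offloaded: {hw_servers} servers")
print(f"TCO savings: {savings:.0%}")
```

Under these toy assumptions the offloaded configuration needs half the servers, for roughly 50% TCO savings; higher OVS overheads or heavier network workloads push the savings toward the upper end of the range quoted above.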
The day after I gave my talk, I listened to the day-two keynote from the Chief ARM Architect at Red Hat. He gave a great talk about the potential of ARM processors for server applications. No doubt we are all rooting for an alternative to x86 (perhaps almost as much as some were rooting for the Broncos), and he made a compelling case. But I was struck by one of his key points. Specifically, he argued that the TCO savings expectations for ARM to displace x86 had been set unrealistically high at 10X. He felt strongly, based on feedback from data center operators, that 20% TCO savings was a sufficiently high bar which, if crossed, would justify the shift to ARM processors. At that point I wanted to jump up from my seat and put my hands together in the traditional "T" formation to signal a time-out! If 20% TCO savings is a game-changer, and Agilio can provide 50%-80% TCO savings just by dropping in our SmartNIC, with no huge commitment to a different processor architecture, isn't that a slam dunk (errr, touchdown)?
In football, there are risky plays that have the potential for huge yardage, and safer plays that typically provide more modest gains. It seems to me that Agilio is a safe play that also provides a lot of yardage. What could be wrong with that? If you want to see for yourself, please try out our TCO calculator.