10 Myths about SDN, NFV and Data Center Switches: Debunked: Part Five

By Netronome | Apr 06, 2017

In this blog, I continue debunking myths about the role of data center switches (implemented in both hardware and software) in SDN and NFV deployments.

Myth #5: Networking data paths implemented in servers – as used in SDN and NFV – can be accelerated using DPDK, enabling high performance and server efficiency.

Intel created the Data Plane Development Kit (DPDK), so it is apt to start any discussion of DPDK with Intel's own DPDK website. It claims that DPDK can improve packet processing performance by up to ten times, and that DPDK software running on an Intel® Xeon® Processor E5-2658 v4 achieves 233 Gb/s (347 Mpps) of L3 forwarding at 64-byte packet sizes.

We have not verified the L3 forwarding claims. When it comes to SDN and NFV, the most relevant metric is the packet processing performance of the virtual switch data plane, for example Open vSwitch (OVS). Our benchmarking with SDN and NFV use cases shows that running OVS in user space with DPDK yields a two- to four-fold improvement in packet processing performance: with few policy rules, throughput improves from about 5 Mpps to 18 Mpps; with a large number of policy rules, from about 3 Mpps to 11 Mpps. However, OVS packet processing consumes 12 Xeon CPU cores, leaving only the remaining 12 cores available for applications and VMs in a typical data center server.

In summary, DPDK does improve performance for SDN and NFV, but not by the magnitude claimed on the Intel website. And it does not help server efficiency, with half the CPU cores spent on networking instead of on revenue-generating applications and VMs. As the figure below shows, offloading OVS packet processing to a SmartNIC (in this example, the Netronome Agilio CX SmartNIC) boosts performance to close to 30 Mpps while restoring server efficiency by returning 11 Xeon CPU cores to revenue-generating applications and VMs.

Forwarding to 8 VMs with 64K Flows
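For readers who want to see what the user-space path described above looks like in practice, enabling the DPDK datapath in OVS can be sketched with a few `ovs-vsctl` commands. This is a minimal sketch, not a tuned deployment: it assumes OVS was built with DPDK support, and the PMD core mask, bridge name, port name, and PCI device address are all illustrative.

```shell
# Minimal sketch of enabling the DPDK (user-space) datapath in OVS.
# Assumes OVS is built with DPDK support; core mask and PCI address are illustrative.
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
# Dedicate a set of cores to the poll-mode-driver threads (here, 12 cores: 0xFFF).
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0xFFF
# A DPDK-backed bridge uses the 'netdev' (user-space) datapath type.
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
# Attach a physical DPDK port by PCI address.
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
    options:dpdk-devargs=0000:01:00.0
```

Note that the `pmd-cpu-mask` line is exactly where the server-efficiency trade-off discussed above is made: every core in that mask busy-polls for packets and is lost to applications and VMs.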

Networking data plane implementations in servers, as used for SDN and NFV, are changing and improving significantly as the Linux kernel and open source communities actively contribute. Specific to OVS, the data plane implementations are evolving for the better, as represented in the figure below from a recent presentation to the OVS community.

Datapath Implementations

The OVS data plane is implemented both in the kernel (Linux OVS) and in user mode. The open source community is actively working on utilizing existing and proven Linux kernel data plane mechanisms such as TC flower and eBPF to implement the OVS data plane. A core motivation is to improve SDN and NFV performance and efficiency by enabling kernel-based offloads of the data plane (using TC flower and/or eBPF) into hardware such as SmartNICs. This is shown in the figure below.

Flow offloads
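As a concrete illustration of the TC flower mechanism mentioned above, a match-action rule can be installed from the command line and marked for hardware offload. This is a hedged sketch: the interface name and IP address are placeholders, and the `skip_sw` flag only succeeds on a NIC whose driver actually supports flower offload.

```shell
# Sketch: install a TC flower match-action rule and request hardware offload.
# 'eth0' and the IP address are illustrative; skip_sw asks the NIC driver to
# handle the rule entirely in hardware (it fails if offload is unsupported).
tc qdisc add dev eth0 ingress
tc filter add dev eth0 ingress protocol ip flower dst_ip 192.0.2.1 skip_sw action drop
# Inspect the installed rules (offloaded rules are flagged 'in_hw').
tc filter show dev eth0 ingress
```

The same rule without `skip_sw` would run in the kernel's software datapath, which is what makes TC flower attractive: one match-action interface, with offload as a transparent acceleration.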

Netronome has implemented the full OVS offload option; the benchmarks shown above are based on this solution. Netronome is also actively working on TC flower and eBPF offloads and contributing that work to the kernel community. These implementations are expected to provide SDN and NFV performance and efficiency benefits similar to those of the full OVS offload. And because TC flower and eBPF are generic match-action processing mechanisms in the kernel, they have broader applicability beyond OVS.
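The eBPF path works along similar lines: a small program compiled to BPF bytecode is attached at the kernel's TC hooks, where it is JIT-compiled or, with appropriate driver support, offloaded to a SmartNIC. The sketch below assumes a hypothetical `drop.c` classifier and a clang build with the BPF target; all names are illustrative.

```shell
# Sketch: compile and attach an eBPF classifier at the TC ingress hook.
# 'drop.c' is a hypothetical program; 'eth0' is an illustrative interface.
clang -O2 -target bpf -c drop.c -o drop.o
# The clsact qdisc provides ingress/egress hooks for eBPF classifiers.
tc qdisc add dev eth0 clsact
tc filter add dev eth0 ingress bpf direct-action obj drop.o sec classifier
```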

It is worth mentioning in this context that FD.io is another relevant initiative: a data plane implementation that aims to replace or bypass the kernel stacks described above. DPDK is being applied in FD.io as well, much as it was applied to OVS. Such initiatives compete against advances from the Linux kernel community, and based on historical precedent, they have not succeeded in mainstream applications and high-volume deployments.

In summary, the industry should align and work with the Linux kernel community to enable higher networking performance and boost server efficiency in SDN and NFV applications. DPDK is a great innovation; however, in this case, it is more of a Band-Aid, short-term solution that does not help server efficiency and is misaligned with respect to the work and innovations in the Linux kernel community.

Myth #6: Moore’s law is alive and well, so using more CPU cycles for network switching and routing in the server is prudent

Stay tuned for part 6 in this series.

Read the Blog, "10 Myths about SDN, NFV and Data Center Switches: Debunked: Part 4" by Sujal Das.
Read the Blog, "10 Myths about SDN, NFV and Data Center Switches: Debunked: Part 3" by Sujal Das.
Read the Blog, "10 Myths about SDN, NFV and Data Center Switches: Debunked: Part 2" by Sujal Das.
Read the Blog, "10 Myths about SDN, NFV and Data Center Switches: Debunked" by Sujal Das.