DPDK Networking Acceleration

Netronome® Agilio™ intelligent server adapters (ISAs) accelerate DPDK (Data Plane Development Kit)-based networking applications, increasing throughput (Mpps) and bandwidth (Gb/s) to host processors. By offloading compute-intensive server-based networking functions and utilizing multi-CPU socket platforms more efficiently, performance can be significantly improved while freeing up CPU cycles for additional application processing.

Netronome’s ISAs support a DPDK-based poll mode driver (PMD) designed for fast packet processing and low latency, eliminating the need for data to traverse the Linux kernel and avoiding interrupt handling overhead when sending and receiving data to and from the x86 server.

Architecture

The DPDK is a set of data plane libraries and network interface controller drivers for fast packet processing. DPDK provides a programming framework for x86 processors to enable high-speed data packet networking applications. Netronome’s DPDK poll mode driver gives user space applications direct, high-performance access to packet receive and transmit functions, keeping per-packet device access overhead low.
Netronome’s DPDK-based driver is ideal for high-instruction compute node and service node applications such as cybersecurity and telecom wired and wireless infrastructure applications. The architecture is based on packet processing on Netronome’s Flow Processors, while using DPDK to execute other workloads on x86 processors. This lowers hardware costs, simplifies the application development environment, and reduces time to market. The DPDK workload model also plays a critical role in Software-Defined Networking (SDN) and Network Functions Virtualization (NFV).

Test Setup/Tools

Networking and security applications often have very demanding throughput, latency and cycles-per-packet requirements for their particular use cases. DPDK is a kernel-bypass technology that brings the network stack into user space, allowing network adapters to DMA directly into application memory. Compared to x86 applications using standard network interfaces, DPDK shows up to a 10X performance improvement. Testing consists of comparing traditional packet handling with standard Linux network devices (netdevs) against DPDK kernel bypass. Compared to standard netdevs, DPDK packet access techniques significantly increase performance while retaining open standards and avoiding proprietary implementations.
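To make the throughput and cycles-per-packet requirements concrete, the arithmetic below is a hedged sketch: the 40Gb/s link rate, 64-byte frame size and 3GHz core clock are illustrative assumptions, not figures from this document. It shows why kernel bypass matters at small packet sizes, where the per-packet CPU budget shrinks to tens of cycles.

```python
# Illustrative arithmetic: per-packet CPU budget at Ethernet line rate.
# Assumptions (not from this document): 40 Gb/s link, 64-byte frames,
# one 3 GHz core dedicated to packet processing.
PREAMBLE_AND_IFG = 8 + 12   # Ethernet preamble + inter-frame gap, bytes
MIN_FRAME = 64              # minimum Ethernet frame, bytes

def line_rate_mpps(link_gbps, frame_bytes):
    """Packets per second (in millions) at full line rate."""
    bits_per_packet = (frame_bytes + PREAMBLE_AND_IFG) * 8
    return link_gbps * 1e9 / bits_per_packet / 1e6

def cycles_per_packet(cpu_ghz, mpps):
    """CPU cycles available per packet on a single core."""
    return cpu_ghz * 1e9 / (mpps * 1e6)

mpps = line_rate_mpps(40, MIN_FRAME)       # ~59.5 Mpps at 64B frames
budget = cycles_per_packet(3.0, mpps)      # ~50 cycles per packet
print(f"{mpps:.1f} Mpps, {budget:.0f} cycles/packet")
```

A single interrupt plus a kernel network stack traversal costs far more than this budget, which is why user space poll mode drivers are needed at these rates.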

Benchmarks

Standard NICs without acceleration struggle with packet processing, which ties up valuable server CPU resources and creates a bottleneck that starves applications. Unique to the NFP architecture, up to four (4) PCIe Gen3 x8 interfaces can be used to transfer data to the x86 processors. As such, the Netronome Agilio Intelligent Server Adapters using DPDK can provide up to 200Gb/s of data throughput to the host.
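The 200Gb/s figure follows from per-interface PCIe arithmetic. The sketch below is hedged: PCIe Gen3's 8 GT/s per-lane rate and 128b/130b encoding are standard facts, but the ~80% protocol-efficiency factor is an assumption typical of Gen3 links, not a figure stated in this document.

```python
# Illustrative PCIe Gen3 bandwidth arithmetic.
# Gen3 signals at 8 GT/s per lane with 128b/130b line encoding.
GEN3_GT_PER_S = 8.0
ENCODING_EFFICIENCY = 128 / 130

def gen3_raw_gbps(lanes):
    """Raw (post-encoding) bandwidth of a PCIe Gen3 link, Gb/s."""
    return GEN3_GT_PER_S * ENCODING_EFFICIENCY * lanes

raw_x8 = gen3_raw_gbps(8)            # ~63 Gb/s per x8 interface
# Assumption: ~80% of raw remains after TLP/DLLP protocol overhead.
effective_x8 = raw_x8 * 0.8          # ~50 Gb/s usable
total = 4 * effective_x8             # ~200 Gb/s across four interfaces
print(f"{raw_x8:.1f} Gb/s raw per x8, ~{total:.0f} Gb/s aggregate")
```

Four such interfaces are what lets the NFP architecture deliver roughly 200Gb/s to the host, versus the single x8 interface of a standard NIC.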

ROI Calculator

Significant ROI savings can be realized through Netronome’s unique multi-PCIe interface approach on Agilio LX ISAs, which doubles the throughput that can be load balanced to DPDK applications. A maximum of ~40Gb/s can be sent to the host via a standard NIC with PCIe Gen3 x8 connectivity. With the Agilio LX, up to 80Gb/s can be intelligently load balanced to x86 application instances, reducing the number of servers required to meet throughput goals by a factor of two.
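The factor-of-two server reduction can be sketched as a simple capacity calculation. This is a hedged example: the 1Tb/s aggregate workload target is hypothetical, while the 40Gb/s and 80Gb/s per-server figures come from the comparison above.

```python
import math

def servers_needed(target_gbps, per_server_gbps):
    """Servers required to absorb an aggregate throughput target."""
    return math.ceil(target_gbps / per_server_gbps)

TARGET = 1000  # hypothetical aggregate workload, Gb/s (assumption)

standard_nic = servers_needed(TARGET, 40)  # standard NIC: ~40 Gb/s/server
agilio_lx = servers_needed(TARGET, 80)     # Agilio LX: up to 80 Gb/s/server
print(standard_nic, "vs", agilio_lx, "servers")
```

Doubling per-server throughput roughly halves the server count (here 25 vs 13, with the odd server due to rounding up), which is where the ROI savings originate.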

Calculate Your Savings

Documentation