
Agilio SmartNIC Support in Red Hat Enterprise Linux 7.5 and the Performance Impact This Can Have for Users

By Simon Horman | May 08, 2018
What are Agilio® CX SmartNICs?

The Agilio CX SmartNICs are based on the Netronome NFP-4000 processor and are available in low-profile PCIe and OCP v2 NIC form factors suitable for COTS servers. The NFP-4000 is a programmable 60-core processor with eight threads per core that transparently offloads and accelerates networking data planes. The flow processing cores have an instruction set optimized for networking, ensuring an unrivaled level of flexibility within the data plane while maintaining performance.

An implementation of the OVS datapath can run on the SmartNIC, allowing very fast packet forwarding while leaving host CPUs free to perform other tasks.
What is OVS?
Open vSwitch (OVS) is a multi-layer virtual switch widely used to implement software-defined networking (SDN). It allows flow-based forwarding using a rich set of matches and actions. These forwarding rules may be configured locally or remotely using OpenFlow. OVS provides a datapath implementation in the upstream Linux kernel, which acts as a software fast path by caching flows to accelerate forwarding. It also provides a user-space datapath implementation that may be accelerated using DPDK, a configuration referred to as OVS-DPDK.
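As a sketch of flow-based forwarding in OVS (bridge and interface names are hypothetical), a bridge and an OpenFlow rule can be configured from the command line:

```shell
# Create a bridge and attach two (hypothetical) interfaces
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth1
ovs-vsctl add-port br0 eth2

# Add an OpenFlow rule: forward packets arriving on port 1 out of port 2
ovs-ofctl add-flow br0 "in_port=1,actions=output:2"

# Inspect the installed rules
ovs-ofctl dump-flows br0
```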

Recent developments in the upstream Linux kernel have allowed OVS-TC, a mechanism to facilitate hardware offload of the OVS datapath, to be developed to production quality.

What is TC?
The Traffic Control (TC) subsystem in the Linux kernel is commonly associated with the QoS mechanisms it implements, and indeed OVS has for many years used these mechanisms to implement ingress policing and egress rate limiting on a per-port basis. There is, however, much more to TC than QoS.
TC allows a datapath to be described in terms of a packet classifier and actions. A classifier extracts packet data or metadata and allows rules to match on this information. Rules may have actions attached which, for example, modify, forward, or drop packets.
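For illustration (the device name is hypothetical), a classifier rule with an attached drop action can be installed with the tc command; this sketch uses the u32 classifier:

```shell
# Attach a classifier hook to the ingress of a device
tc qdisc add dev eth1 ingress

# Classify packets by destination IP and attach a drop action
tc filter add dev eth1 ingress protocol ip u32 \
    match ip dst 192.0.2.1/32 action drop
```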

What is TC Flower?
TC Flower is a TC classifier implemented in the upstream Linux kernel. It supports many of the packet data and metadata matches provided by OVS. TC Flower allows classifier rules to describe a software datapath, implemented in the Linux kernel; a hardware datapath, implemented for example on an Agilio CX SmartNIC; or both. These properties make TC Flower an ideal basis for providing hardware offload of OVS.
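The choice of software, hardware, or both datapaths is expressed with the flower classifier's skip_hw and skip_sw flags. A minimal sketch, with a hypothetical device name:

```shell
# Classifier hook on ingress
tc qdisc add dev eth1 ingress

# Software-only flower rule: handled by the kernel datapath
tc filter add dev eth1 ingress protocol ip flower skip_hw \
    dst_ip 10.0.0.1 ip_proto tcp action drop

# Hardware-only flower rule: offloaded to the NIC, and rejected
# if the hardware cannot support the match or action
tc filter add dev eth1 ingress protocol ip flower skip_sw \
    dst_ip 10.0.0.2 ip_proto tcp action drop
```

Omitting both flags describes a rule in both datapaths at once.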

Figure 1: The Agilio transparent OVS-TC offload architecture

What is OVS-TC?
OVS-TC is a hardware offload mechanism for OVS which uses TC Flower to program flows into hardware. It has been implemented by enhancing upstream OVS to first attempt to add flows to the TC datapath; when OVS is configured for hardware offload, such flows are offloaded to the SmartNIC. Flows that cannot be added to the hardware, for example because they include a match or action that is not yet supported by the hardware, are added to the OVS kernel datapath, just as they would be if that datapath were in use and hardware offload were not in operation.
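In practice, OVS is switched to the TC datapath with a single OVSDB flag; a minimal sketch:

```shell
# Enable TC hardware offload; new flows are then offered to TC Flower first
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true

# Restart the daemon so the setting takes effect
# (the service name varies by distribution)
systemctl restart openvswitch
```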
What are Representors?
Representor netdevs, or representors, are netdevs created to represent the switch side of a port. When the flower firmware for the Agilio CX SmartNIC is loaded, the following netdevs are created:
  • A netdev for the PCI physical function (PF) to represent the PCI connection between the host and the card.
  • Representor netdevs for each physical port (MAC) of the card. These are used to configure the port (for example its link state), to access port statistics, and to carry fallback traffic. Fallback traffic consists of packets that are not handled by the datapath on the SmartNIC, usually because no matching rule is present, and are instead sent to the host for processing.
  • A representor netdev for the PF. This is not currently used in an OVS-TC system.

When SR-IOV virtual functions (VFs) are instantiated, a representor netdev is created for each VF. Like representors for physical ports, these are used for configuration, statistics and fallback packets.

When using OVS-TC, it is the physical port representor and VF representor netdevs that are attached to OVS, allowing OVS to configure the associated ports and VFs and send and receive fallback packets.
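The steps above can be sketched as follows (all interface names are hypothetical; actual representor names depend on the driver and system):

```shell
# Instantiate two SR-IOV VFs on the PF; VF representors are created automatically
echo 2 > /sys/class/net/ens3/device/sriov_numvfs

# Attach the physical port representor and the VF representors to OVS
ovs-vsctl add-br br0
ovs-vsctl add-port br0 ens3np0   # physical port representor
ovs-vsctl add-port br0 eth1      # VF representor
ovs-vsctl add-port br0 eth2      # VF representor
```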
Performance of OVS-TC
OVS-TC with the Agilio CX SmartNIC can improve performance in a relatively simple L2 forwarding use case by 2X. As we scale to real-life use cases with complex rules, tunneling, and more flows, the Agilio solution provides an average improvement of 16X over the competition.


Figure 2: 1-port Mpps performance comparison | VXLAN | Agilio CX vs. Intel XL710 | 2x40GbE

Results presented in Netronome's “Virtual Switch Acceleration with OVS-TC and Agilio 40GbE SmartNICs” white paper compare OVS-TC and OVS-DPDK for an applied load of 40Gb/s, with packet sizes ranging from 64 bytes to 1518 bytes and from 1,000 up to 256,000 flows with a matching number of rules (1:1). Packets are injected from the traffic generator at the network interface and into the datapath in both cases. The following graphs display the datapath performance for OVS-DPDK and hardware-accelerated OVS-TC in a scale-up test using a single port.

Figure 3: Flow scalability 64B packet size test results for 1:1 rules/flows | PHY-OVS-PHY

As the number of flows and rules increases, OVS-DPDK performance degrades dramatically.

Figure 4: OVS-DPDK performance drops as we scale to higher flow/rule count PHY-OVS-PHY | 2X40GbE

The results for standard OVS-DPDK software show that the number of flows has a negative impact on the frame rate in both test runs. As the number of flows increases, the frame rate keeps dropping. At a 64-byte packet size, OVS-DPDK delivers 7Mpps with 64K flows, but when scaled to 256K flows the performance drops by a factor of 1.75 to approximately 4Mpps.

OVS-TC shows very different performance behavior as we scale to more flows. At a 64-byte packet size, OVS-TC delivers 33Mpps for 1K flows, and as we scale to 64K flows the performance does not drop. Compared to OVS-DPDK in the same test, OVS-TC running on the Agilio SmartNIC performs 2.5X better than the Intel solution.

Figure 5: 1-Port Agilio 2x25G SmartNIC with OVS-TC versus Intel XXV710 2x25G with OVS-DPDK
PHY-VM-PHY | VXLAN | blast-100_flows-256036

For real-world data center conditions with larger numbers of flows and rules, OVS-TC with the Agilio CX SmartNIC delivers 2.5X lower latency than OVS-DPDK with Intel NICs.

Figure 6: 2.5X lower latency with Agilio CX
OpenStack Integration
During the OpenStack Pike cycle, the Nova and os-vif changes required to support representors and passthrough plugging were merged. During the OpenStack Queens cycle, the automation to manage the OVSDB flags that enable OVS-TC hardware offload was merged in TripleO.

Upstream in-tree OpenStack integration currently allows users to create instances with offloaded ports by specifying the 'direct' vNIC type. With proper configuration, these instances can be booted on isolated cores, with exclusive access to the passthrough PCI device and proper NUMA affinity, to minimize latency and jitter.
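A sketch of creating such an instance (the network, flavor, image, and port names are hypothetical):

```shell
# Create a Neutron port with the 'direct' vNIC type
openstack port create --network net1 --vnic-type direct offload-port

# Boot an instance attached to the offloaded port
openstack server create --flavor m1.small --image rhel-7.5 \
    --nic port-id=offload-port vm1
```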

All software components required to deploy OVS-TC are upstream. The minimum recommended upstream versions are:
  • Linux Kernel v4.15
  • Open vSwitch v2.9
  • OpenStack Pike
Flower firmware for the Agilio CX SmartNIC is also required and is present in the upstream Linux Firmware project. The recommended minimum version is:
  • Linux Firmware upstream commit 0783fb952fb3 (“nfp: add symlink for mixed mode Agilio CX 2x25GbE cards”).
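Whether the flower firmware is actually in use can be checked via ethtool (the interface name is hypothetical):

```shell
# The firmware-version field of the nfp driver reports the loaded firmware
ethtool -i ens3 | grep -E 'driver|firmware-version'
```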

The upstream OVS-TC solution is available in RHEL and associated Red Hat products. The breakdown of minimum versions by software component is as follows:
  • Linux Kernel and Firmware:  RHEL 7.5
  • Open vSwitch: Fast Datapath (FDP) 18.04
  • OpenStack: RHOSP13 (release pending)

The upstream OVS-TC solution is available in Ubuntu as of 18.04 LTS.