“L’union fait la force.” “Unity makes strength,” in French: this is the motto of Belgium, and incidentally, it applies perfectly to Open Source software, too. The eighteenth edition of FOSDEM, the Free and Open Source Software Developers’ European Meeting, was held as always in Brussels, the capital of that country, in early February. For those unfamiliar with the event, it may well be one of the biggest gatherings about Open Source software in Europe, with several thousand attendees. Alongside the main tracks, forty-two thematic “developer rooms” took place during the two days of the event. They covered a variety of topics from programming languages (Python, Rust…) to sub-domains of computer science (embedded systems, testing and automation…): there is something for everyone at FOSDEM! In particular, the last three editions hosted an “SDN and NFV” devroom, where many people working on programmable networks gather to talk about the latest trends in the domain. I had the opportunity to attend the presentations in this room, and even to contribute to the agenda by giving a talk. So let’s review together the contents of this track!
Over the years, as SDN gains wider adoption in the industry, it requires more complete tools and more lightweight infrastructure. So we had a talk about OpenDaylight, firmly established as an SDN controller, which uses YANG models together with the NETCONF and RESTCONF protocols to implement the control path. And we had presentations about orchestration solutions that can be deployed to handle the design, creation and management of virtual network functions (VNFs) throughout their life cycle. One was ONAP, gathering more than ten individual applications into a consistent framework. The other was Ligato, focusing on cloud-native VNFs.
“Cloud-native” VNFs, emerging in cloud environments and running inside containers rather than virtual machines, are a very interesting concept. They show that SDN has to adapt to the evolution of containerized computing, which is also gaining momentum in data centers. In this regard, there were additional talks about new platforms and new ways to do SDN. The first talk of the day focused on redesigning VNFs as “data plane micro-services” in order to get more granularity and more flexibility: a collection of multi-process micro-services that could be arranged to decide if and how they would share CPUs or I/Os. Allow this network function to be greedy, but not that one. Other speakers also had their own vision of the future of SDN. One solution is to use plug-ins based on Virtual Distributed Ethernet (VDE) to run VNFs in namespaces on vanilla Linux kernels. Another consists in programming the logic of SDN directly into… the network protocol itself. This is the idea behind Segment Routing over IPv6 (SRv6) for SDN: the segments used are made not only of a locator, but also of a function identifier and its arguments. In this way, the next hop is deduced from a segment in the segment routing header, and once the packet has arrived there, the rest of the segment indicates what action must be applied to it. This makes it possible to create network overlays while keeping control of the underlay path, and to program networking flows and processing without having to store state on the machines.
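To make the locator/function/arguments split concrete, here is a minimal sketch in Python of how a 128-bit SRv6 segment can be decomposed. The bit widths chosen for each field are a hypothetical example: in practice the split is configured per deployment, not fixed by the specification.

```python
import ipaddress

# Hypothetical SID layout; real deployments choose their own field sizes.
LOC_BITS, FUNC_BITS = 64, 32        # remaining 32 bits carry the arguments
ARG_BITS = 128 - LOC_BITS - FUNC_BITS

def parse_srv6_sid(sid: str):
    """Split a 128-bit SRv6 segment into locator, function and arguments."""
    value = int(ipaddress.IPv6Address(sid))
    args = value & ((1 << ARG_BITS) - 1)                  # low bits: arguments
    func = (value >> ARG_BITS) & ((1 << FUNC_BITS) - 1)   # middle: function id
    loc = value >> (FUNC_BITS + ARG_BITS)                 # high bits: locator
    return loc, func, args

# The locator routes the packet to a node; once there, function 2 is
# applied with argument 5 (both values are purely illustrative).
loc, func, args = parse_srv6_sid("2001:db8:0:1:0:2:0:5")
```

The routing part of the segment (the locator) behaves like an ordinary IPv6 prefix, which is what lets SRv6 steer packets through unmodified IPv6 routers while the endpoint interprets the rest of the address.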
These proposals for new SDN architectures are a consequence of the effervescent activity in the domain, which also translates into greater needs for tooling, automation and quality assurance. The presentations involving the OPNFV platform explained how the project is trying to answer this demand. One session was about the cross-community continuous integration process deployed by the project: every day, they are able to produce new builds of the main Open Source networking projects, such as Open vSwitch, OpenStack, Kubernetes, OpenDaylight and many others. OPNFV also works on the Barometer project, which can be used to efficiently monitor VNFs in order to react to faults, evaluate system performance, and enforce Service-Level Agreement (SLA) policies.
But beyond continuous integration and monitoring, the transformations that SDN is undergoing are mostly sustained by strong efforts from many companies and individuals to make it scale better, to manage resources more efficiently, and ultimately to push even more packets through the pipe. Yes, it is time to talk about the data plane!
One of the highlighted models was VPP (Vector Packet Processing), at the heart of the FD.io project. Built on DPDK, it offers fast user space processing of vectors of packets, as defined by a flow graph organizing the various plug-ins, whether available natively or defined by the user. Both presentations introduced VPP: the first from a functional point of view, while the second focused more on the motivations behind this architecture (basically, SDN becoming too complex) and called for contributors.
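The core idea, processing whole vectors of packets through a graph of nodes rather than one packet at a time, can be illustrated with a small Python toy model. The node names loosely echo VPP’s graph nodes, but the fields and dispatch logic here are purely illustrative, not the actual VPP API.

```python
# Toy model of vector packet processing: each graph node handles a whole
# batch ("vector") of packets at once, then hands sub-vectors to the next
# nodes. Node names and packet fields are illustrative only.

def ethernet_input(packets):
    # Classify the whole vector, splitting it per next node.
    return {"ip4-input": [p for p in packets if p["proto"] == "ip4"],
            "drop": [p for p in packets if p["proto"] != "ip4"]}

def ip4_input(packets):
    # Discard packets whose TTL is exhausted, forward the rest to lookup.
    return {"ip4-lookup": [p for p in packets if p["ttl"] > 1],
            "drop": [p for p in packets if p["ttl"] <= 1]}

GRAPH = {"ethernet-input": ethernet_input, "ip4-input": ip4_input}

def run(packets):
    pending = {"ethernet-input": packets}
    done = {"ip4-lookup": [], "drop": []}   # leaf nodes of this toy graph
    while pending:
        node, batch = pending.popitem()
        for nxt, out in GRAPH[node](batch).items():
            if not out:
                continue
            if nxt in GRAPH:
                pending.setdefault(nxt, []).extend(out)
            else:
                done[nxt].extend(out)
    return done

result = run([{"proto": "ip4", "ttl": 64}, {"proto": "arp", "ttl": 0},
              {"proto": "ip4", "ttl": 1}])
```

Keeping a whole vector in flight through one node before moving to the next is what lets the real VPP amortize instruction-cache misses across dozens of packets instead of paying them once per packet.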
Two developers of Snabb (previously Snabb Switch) also had a recommendation, although a different one: they asked NIC designers to provide the specifications of the interfaces required to create drivers for the cards, in order to enable third-party implementations. And they explained how they did just that for Snabb, with the use of LuaJIT, resulting in a fast virtual router. While we are on the subject of driver specifications, we also talked about version 1.1 of the Virtio standard, soon to land, which defines a simple mechanism to provide virtual devices to guest operating systems. In comparison with version 1.0, the new version focuses more on hardware implementations and overall performance, adding mechanisms such as packed virtqueues to gain about eighteen percent additional throughput. Several use cases for Virtio were also showcased, such as vDPA (virtual data path acceleration), Vhost-PCI for fast VM-to-VM communication, Virtio-vhost-user (different approach, same goal) or transparent bonding for SR-IOV devices.
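The packed virtqueue layout of Virtio 1.1 keeps driver and device on a single shared descriptor ring, with a wrap counter telling each side which entries are fresh. The Python toy below sketches that idea under heavily simplified assumptions: field names, sizes and the single-descriptor buffers are illustrative, and the real memory layout is defined by the Virtio 1.1 specification, not by this model.

```python
# Toy model of a packed virtqueue ring (the new layout in Virtio 1.1):
# one shared descriptor ring, with per-descriptor avail/used flags that
# are compared against a wrap counter to spot fresh entries.
RING_SIZE = 4

class PackedRing:
    def __init__(self):
        self.desc = [{"addr": 0, "len": 0, "avail": False, "used": False}
                     for _ in range(RING_SIZE)]
        self.avail_idx, self.avail_wrap = 0, True   # driver's position
        self.used_idx, self.used_wrap = 0, True     # device's position

    def driver_add(self, addr, length):
        """Driver publishes a buffer by flipping the flags to its wrap value."""
        d = self.desc[self.avail_idx]
        d.update(addr=addr, len=length,
                 avail=self.avail_wrap, used=not self.avail_wrap)
        self.avail_idx = (self.avail_idx + 1) % RING_SIZE
        if self.avail_idx == 0:
            self.avail_wrap = not self.avail_wrap   # wrapped around the ring

    def device_poll(self):
        """Device consumes the next descriptor, or returns None if idle."""
        d = self.desc[self.used_idx]
        if d["avail"] != self.used_wrap or d["used"] == self.used_wrap:
            return None                 # nothing new to consume
        d["used"] = self.used_wrap      # mark the buffer as used
        self.used_idx = (self.used_idx + 1) % RING_SIZE
        if self.used_idx == 0:
            self.used_wrap = not self.used_wrap
        return d["addr"], d["len"]

ring = PackedRing()
ring.driver_add(0x1000, 64)
first = ring.device_poll()     # the buffer just published
second = ring.device_poll()    # nothing left on the ring
```

Compared to the split rings of Virtio 1.0, keeping everything in one ring means fewer cache lines touched per descriptor, which is one of the sources of the throughput gain mentioned above.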
Virtio alone does not solve all problems for accessing the hardware, though. A very interesting presentation by Linaro explained how mediated devices, recently introduced in Linux for GPUs but with provision for additional devices, could be used as an interface between kernel drivers and user space frameworks such as DPDK to improve packet steering. This comes from the need to obtain much better latency and jitter performance for time-sensitive applications than what the Linux networking stack allows. The result of their efforts is called net_mdev, which is expected to support “zero copy” for packets, to rely heavily on the IOMMU, and to require no user space drivers while remaining compatible with the kernel stack for some use cases. This is still a work in progress, and some parts of the design have not been settled yet; it could be based on the new AF_XDP socket family.
This leads logically to AF_XDP, which had its own introduction. “AF_XDP” is in fact the new name for “AF_PACKET v4,” which was first presented at Netdev 2.2. Since then, it has been reworked and gained performance in most scenarios (it did lose some in a few cases, but the work is still in progress as of this writing). This mechanism, relying in part on XDP programs to redirect some of the traffic, is expected to drastically improve Linux performance when it comes to sending packets to user space, while still being able to process some of the packets in the kernel. Everyone working on networking performance at the moment has high expectations for this new socket type. That was probably the most successful talk of the day in terms of attendance: the room was packed, and I think some people could not even enter, as is too often the case at FOSDEM. Anyway, we too are impatient to see the final version of this work, and it will be interesting to study how offloads can best work with this mechanism.
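The division of labor described above, an XDP program deciding per packet whether traffic goes to a user space queue or stays in the kernel stack, can be sketched with a tiny Python toy model. The port number, packet representation and queue abstraction are all hypothetical; real XDP programs are eBPF bytecode running in the kernel, and the user space side uses actual AF_XDP sockets.

```python
# Toy model of the AF_XDP split: an XDP-style filter inspects every
# incoming packet and either redirects it to a user-space queue or lets
# the kernel stack keep processing it. All values here are illustrative.
from collections import deque

USER_SPACE_PORTS = {8080}      # hypothetical: ports our fast-path app owns

def xdp_prog(packet):
    # Verdicts loosely mirror the kernel's XDP_REDIRECT / XDP_PASS.
    return "REDIRECT" if packet["dport"] in USER_SPACE_PORTS else "PASS"

user_queue = deque()           # stands in for the AF_XDP receive ring
kernel_stack = []              # packets left to the regular network stack

for pkt in [{"dport": 8080}, {"dport": 22}, {"dport": 8080}]:
    if xdp_prog(pkt) == "REDIRECT":
        user_queue.append(pkt)
    else:
        kernel_stack.append(pkt)
```

The point of the real mechanism is that this decision happens at the earliest possible stage of receive processing, so redirected packets skip most of the kernel stack entirely while SSH, ARP and other control traffic still reach it untouched.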
Speaking of offloads… there was of course our own presentation, resolutely focused on data plane acceleration, and one of the most low-level sessions of the day. It was entitled “The Challenges of XDP Hardware Offload,” and presented the current state of our work on offloading eBPF programs from Linux hosts to the network device. It was not so different from the talk we gave at the last Netdev conference, but we have gained support for additional eBPF features in the meantime: we are working hard, and making fast progress!
Compared to Netdev, FOSDEM was an excellent occasion for us to present our work to a new, possibly less technical audience, and most of all to make it very clear to everyone that we are deeply committed to Open Source. As I explained during the talk, our driver, along with all the work done on the Linux side to enable eBPF hardware offload, has been pushed upstream and is available to anyone willing to reuse it or to contribute to the project. So everyone can take part in improving this super-fast data plane working in cooperation with the kernel!
Quentin Monnet is a senior software engineer at Netronome, where he works on advanced technologies at the driver level, such as the software pieces of eBPF offload, and modestly contributes to several open source projects. Before joining Netronome, he was part of the Research & Development team at 6WIND, and was involved in European research projects that focused on fast, flexible and stateful packet processing for SDNs. Quentin holds a Ph.D. in Computer Science from Université Paris-Est, France.
Bapi Vinnakota taught at the University of Minnesota, Twin Cities, immediately after a Ph.D. at Princeton. He joined Intel through an acquisition and was the architect of a flow processor for VoIP applications. At Intel, he was a Principal Engineer and worked on wired and wireless networking technologies, university outreach for the IXP program, and cloud computing, and incubated a networking SaaS product. More recently, he has been exploring algorithmic trading on the stock markets, developing and analyzing trading strategies in Python and R. He has published 50 refereed journal and conference articles, is an author/co-author on 10 patents and edited one book.