…And you are right back to where you were when everything was bare metal, and specific servers ran specific applications.
Let’s go back a few years. More like a decade or so, when VMware was only used in Test/Dev environments and pundits predicted that data centers would soon be 30 percent, 50 percent, or maybe even 75 percent virtualized. What was the promise of virtualization? It was that applications, contained in virtual machines (VMs), could be ported across servers as needed. Data centers would no longer need to be over-provisioned for worst-case scenarios, any application could run on any server, and we could adapt to demand on the fly.
In many cases that is true, but different challenges have arisen. In specific instances, networking has been the challenge. We all know that different applications have different networking needs, and different data centers offer different solutions to their customers. A typical cloud data center needs to be incredibly nimble – letting customers start and stop applications at will without requiring them to install specific drivers or tailor their applications to the cloud provider’s infrastructure. Flexibility is king! Also, in these cases, the cloud providers migrate applications as needed to best utilize available resources. Network engineers use Virtio to support these customers. Performance is low but flexibility is extremely high, and nothing needs to be configured in the application.
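To make that concrete, attaching a paravirtualized Virtio NIC is typically a one-line model choice in the VM definition. A minimal libvirt interface stanza might look like the sketch below; the network name `default` is an assumption:

```xml
<!-- Hypothetical libvirt domain XML fragment: a paravirtualized NIC.
     The guest needs only the generic virtio-net driver, so the VM can
     migrate to any host regardless of the physical NIC underneath. -->
<interface type='network'>
  <source network='default'/>   <!-- host-side bridge/NAT network (assumed name) -->
  <model type='virtio'/>        <!-- paravirtualized model; no vendor-specific driver -->
</interface>
```

Nothing here ties the guest to a particular server, which is exactly why cloud providers reach for Virtio when flexibility matters more than raw throughput.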
Some customers require high network performance. Maybe it’s a Telco VNF supporting thousands of users with tens of gigabits of concurrent traffic. In this case, the network infrastructure needs to be architected for performance, not flexibility. For these customers, network engineers use SR-IOV to bypass the host kernel and drive network traffic directly into the customer’s VM. Unfortunately, while SR-IOV provides tremendous performance, it requires that the VM have SR-IOV drivers specific to the network interface card (NIC) installed in the server where the VM is running.
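For illustration, here is a sketch of how an SR-IOV virtual function (VF) is typically handed to a VM with libvirt; the PCI address is hypothetical and depends on where the NIC sits in the host:

```xml
<!-- Hypothetical libvirt fragment: pass an SR-IOV VF straight into the guest.
     VFs are first created on the host's physical function, e.g.:
       echo 4 > /sys/class/net/<pf>/device/sriov_numvfs
     The guest must carry the NIC vendor's VF driver, which ties the VM
     to hosts equipped with that specific NIC. -->
<interface type='hostdev' managed='yes'>
  <source>
    <!-- PCI address of one VF on this particular host (assumed values) -->
    <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
  </source>
</interface>
```

Note how the definition hard-codes a device address on one server – the performance comes precisely from giving up the abstraction that made Virtio portable.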
Other applications, like big data processing VMs, have moderate network processing requirements and need some flexibility to migrate across servers. This is where DPDK comes into play. The DPDK poll mode driver for Virtio, which operates in user space, is hardware independent, enabling features like live migration. However, networking services may be limited, since they must be ported to user space.
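One common way this plays out in practice is a vhost-user port: a DPDK-accelerated switch on the host (for example, OVS-DPDK) exposes a UNIX socket, and the guest talks to it through the hardware-independent Virtio poll mode driver. A sketch of the libvirt side, with an assumed socket path:

```xml
<!-- Hypothetical libvirt fragment: vhost-user port backed by a DPDK
     switch on the host. The guest sees an ordinary virtio device and
     can bind it to the DPDK virtio poll mode driver in user space. -->
<interface type='vhostuser'>
  <source type='unix' path='/var/run/openvswitch/vhost-user0' mode='client'/>
  <model type='virtio'/>
</interface>
```

Real deployments also need shared hugepage memory backing for the guest so the host switch can reach its buffers; that detail is omitted here for brevity.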
Granted, this is a high-level summary of available networking options, but this is a blog post, not a white paper. We get much more granular in our white paper.
What we have just described is a scenario where your 100 percent virtualized data center is still siloed. Applications with specific network requirements can only be run in VMs that are tied to specific servers. Depending on how balanced your customer’s workloads are, their VMs may not be able to utilize 67 percent of your data center.
The solution is to add an abstraction layer to the network data plane. Netronome Agilio SmartNICs do just that. Agilio SmartNICs offer Express Virtio (XVIO), which runs entirely on the SmartNIC, consuming no host server cycles, and allows applications requiring the functionality of DPDK, the performance of SR-IOV, or the flexibility of Virtio to run on the same server.
We go into extensive detail about how XVIO works and how it compares to Virtio, SR-IOV, and DPDK in our “The Case for Express Virtio” white paper.
For this post, suffice it to say that a 100 percent virtualized data center does not mean that any application can run on any server. However, with Netronome Agilio SmartNICs, it could mean just that.