Microsoft offloads networking to FPGA-powered NICs

This is how Azure just hit 30Gbps of throughput – and how clouds are being built now


Microsoft has switched on new network interface cards packing field-programmable gate arrays and announced that doing so has let it hit 30Gbps of throughput for servers in Azure.

Redmond’s talked up these “SmartNICs” since late 2016 and even detailed (PDF) their workings to the Open Compute project.

It has now revealed the NICs are in operation in a Friday post that says they are generally available on its D/DSv2, D/DSv3, E/ESv3, F/FS, FSv2, and Ms/Mms instance types, running Ubuntu 16.04, RHEL 7.4, CentOS 7.4, SUSE Linux Enterprise Server 12 SP3, Windows Server 2016 and Windows Server 2012R2.

Microsoft says the new NICs, plus off-the-shelf tech like SR-IOV, make it possible to hit up to 30Gbps of throughput on Azure servers, by getting NICs to do the work rather than leaving networking to the CPU alone. With Meltdown and Spectre hobbling CPUs, that helping hand may be even more appreciated than Microsoft anticipated.
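For readers wanting to try it, Azure exposes the feature as "accelerated networking" on the NIC. A minimal sketch using the Azure CLI follows; the resource-group, NIC, and VNet names are hypothetical placeholders, and the guest-side checks assume a Linux VM where the SR-IOV virtual function shows up as a Mellanox device:

```shell
# Create a NIC with accelerated networking enabled
# (myGroup, myNic, myVnet, mySubnet are placeholder names)
az network nic create \
  --resource-group myGroup \
  --name myNic \
  --vnet-name myVnet \
  --subnet mySubnet \
  --accelerated-networking true

# Inside the VM, confirm the SR-IOV virtual function is active:
lspci | grep -i mellanox        # the VF should appear as a Mellanox device
ethtool -S eth0 | grep vf_      # traffic counters moving over the VF path
```

Traffic that hits the VF path bypasses the host's virtual switch entirely, which is where the latency and throughput gains come from.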

Significantly, Microsoft’s news comes after AWS revealed its own “Project Nitro” effort to offload networking and other functions from the CPU to a custom ASIC.

Microsoft said it chose FPGAs rather than ASICs because the latter lack agility, suggesting the possibility of a continuously integrated networking stack in which hardware and software are constantly optimized for one another.

History suggests that with two big clouds moving work off the CPU, other vendors will soon offer similar technologies – and also that someone will soon offer up a new piece of jargon blending DevOps and networking. ®

