Nvidia intros the 'SuperNIC' – it's like a SmartNIC, DPU or IPU, but more super
If you're doing AI but would rather not do InfiniBand, this NIC is for you
Nvidia has given the world a "SuperNIC" – another device to improve network performance, just like the "SmartNIC," the "data processing unit" (DPU), and the "infrastructure processing unit" (IPU). But the GPU-maker insists its new device is more than just a superlative.
So what exactly is a SuperNIC? An Nvidia explainer describes it as a "new class of networking accelerator designed to supercharge AI workloads in Ethernet-based networks." Key features include high-speed packet reordering, advanced congestion control, programmable I/O pathing, and, critically, integration with Nvidia's broader hardware and software portfolio.
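To make one of those features a little more concrete, the sketch below shows what packet reordering means in principle: when a fabric sprays a single flow's packets across multiple paths, they can arrive out of order and have to be put back in sequence before the data is handed to the application. The SuperNIC does this in silicon at line rate; this toy Python version, with made-up sequence numbers and payloads, only illustrates the idea.

```python
# Toy illustration of in-order delivery from out-of-order arrivals.
# The SuperNIC does this in hardware at 400Gb/s; this is only the concept.

def reorder(packets):
    """Yield payloads in sequence order as gaps are filled.

    `packets` is an iterable of (seq_no, payload) tuples that may
    arrive out of order, e.g. because they took different paths.
    """
    expected = 0          # next sequence number we can deliver
    buffered = {}         # out-of-order packets parked by seq_no

    for seq_no, payload in packets:
        buffered[seq_no] = payload
        # Drain the buffer for as long as the next-expected packet is present
        while expected in buffered:
            yield buffered.pop(expected)
            expected += 1


if __name__ == "__main__":
    arrivals = [(1, "b"), (0, "a"), (3, "d"), (2, "c")]
    print(list(reorder(arrivals)))   # ['a', 'b', 'c', 'd']
```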
If that sounds like what a SmartNIC or DPU would do, you're not wrong. The SuperNIC is even based on a current Nvidia DPU, the BlueField-3.
Nvidia's BlueField-3 SuperNIC promises InfiniBand-ish network performance – if you buy Nvidia's fancy 51.2Tbit/sec switches. Source: Nvidia
The difference is that the SuperNIC is designed to work alongside Nvidia's own Spectrum-4 switches as part of its Spectrum-X offering.
Nvidia's senior veep for networking, Kevin Deierling, emphasized in an interview with The Register that the SuperNIC isn't a rebrand of the DPU, but rather a different product.
East-west vs north-south
Before considering the SuperNIC, it's worth remembering that SmartNICs/IPUs/DPUs are network interface controllers (NICs) that include modest compute capabilities – sometimes fixed-function ASICs, with or without a couple of Arm cores sprinkled in, or even highly customizable FPGAs.
Many of Intel and AMD's SmartNICs are based around FPGAs, while Nvidia's BlueField-3 class of NICs pairs Arm cores with a bunch of dedicated accelerator blocks for things like storage, networking, and security offload.
This variety means certain SmartNICs are better suited – or at the very least marketed – to some applications than to others.
For the most part, we've seen SmartNICs – or whatever your preferred vendor wants to call them – deployed in one of two scenarios. The first is in large cloud and hyperscale datacenters where they're used to offload and accelerate storage, networking, security, and even hypervisor management from the host CPU.
Amazon Web Services' custom Nitro cards are a prime example. The cards are designed to physically separate the cloudy control plane from the host. The result is that more CPU cycles are available to run tenants' workloads.
This is one of the use cases Nvidia has talked up for its BlueField DPUs, and the company has partnered with the likes of VMware and Red Hat to integrate the cards into their software and virtualization stacks.
Bypassing bottlenecks
The second application for SmartNICs has focused more heavily on network offload and acceleration, with an emphasis on eliminating bandwidth and latency bottlenecks in east-west traffic – the server-to-server flows inside a datacenter – as opposed to the north-south traffic that enters and leaves it.
This is the role Nvidia sees for the SuperNIC variant of its BlueField-3 cards. While both BlueField-3 DPUs and SuperNICs are based on the same architecture and share the same silicon, the SuperNIC is a physically smaller device that uses less power, and is optimized for high-bandwidth, low-latency data flows between accelerators.
"We felt it was important that we actually named them differently so that customers understood that they could use those for east-west traffic to build an accelerated AI compute fabric," Deierling explained.
InfiniBand-like network for those that don't want InfiniBand
Those paying attention to large-scale deployments of Nvidia GPUs for AI training and inference workloads will know that many of those clusters communicate over InfiniBand networks.
The protocol is widely deployed throughout Microsoft's GPU clusters in Azure, and Nvidia sells plenty of InfiniBand kit.
For those wondering, this is where Nvidia's ConnectX-7 SmartNIC fits in. According to Deierling, much of the functionality the SuperNIC needs to deliver low-latency, low-loss networking over Ethernet is already built into InfiniBand, so a NIC on an InfiniBand fabric doesn't need as much onboard compute.
With that said, ConnectX-7 does add another layer of complexity to Nvidia's networking identity crisis. At least for now, Nvidia's data sheets still describe [PDF] the card as a SmartNIC, though Deierling tells us the biz is shying away from that descriptor due to the confusion it causes.
However, not every customer wants to support multiple network stacks, and many would rather stick with standard Ethernet. Nvidia is therefore positioning its Spectrum-4 switches and BlueField-3 SuperNICs as the tech that lets them do just that.
Marketed as Spectrum-X, the offering is a portfolio of hardware and software designed to work together to provide InfiniBand-like network performance, reliability, and latencies using 400Gbit/sec RDMA over Converged Ethernet (RoCE).
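For a rough sense of why that matters, here's a back-of-envelope sketch of what congestion costs on a 400Gbit/sec link. The payload size, step count, and degraded utilization figure are illustrative assumptions, not Nvidia numbers.

```python
# Back-of-envelope: what congestion costs on a 400Gbit/sec AI fabric.
# The payload size, step count, and utilization figure are illustrative
# assumptions, not Nvidia numbers.

LINK_GBPS = 400          # per-port line rate quoted for Spectrum-X / BlueField-3
PAYLOAD_GB = 10          # assumed gradient exchange per training step
STEPS = 10_000           # assumed length of a training run
DEGRADED_UTIL = 0.6      # assumed effective utilization on a congested fabric

ideal_s = (PAYLOAD_GB * 8) / LINK_GBPS                       # seconds per step
congested_s = (PAYLOAD_GB * 8) / (LINK_GBPS * DEGRADED_UTIL)

print(f"Ideal transfer per step:     {ideal_s * 1e3:.0f} ms")
print(f"Congested transfer per step: {congested_s * 1e3:.0f} ms")

wasted_minutes = (congested_s - ideal_s) * STEPS / 60
print(f"Extra time over {STEPS} steps: {wasted_minutes:.0f} minutes")
```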
While this avoids having to manage two network stacks, it won't necessarily eliminate hardware lock-in. The individual components will work with the broader Ethernet ecosystem, but to fully take advantage of Spectrum-X's feature set, customers really need to deploy Nvidia's switches and SuperNICs in tandem.
But depending on who you ask, customers may not need to resort to Nvidia Ethernet kit, with Broadcom's Ram Velaga previously telling The Register "there's nothing unique about their device that we don't already have." He claimed that Broadcom can achieve the same thing using its Jericho3-AI switch or Tomahawk5 switch ASICs in conjunction with customers' preferred DPUs.
Whether or not that's true, major OEMs don't seem worried: Dell, Hewlett Packard Enterprise, and Lenovo have all announced plans to offer Spectrum-X to prospective AI customers – presumably alongside large orders of Nvidia GPU servers. ®