Mellanox's next-gen Innova network adapter won't just pack the obligatory high-speed interfaces – it'll also embed a Xilinx FPGA.
Offloading workloads was already a key plank of Mellanox's adapter strategy, and it has evidently struck a chord with customers – hence the FPGA.
Senior director of marketing Bob Doud told The Register that the incoming Innova-2 adapters extend that ability to "offload functions that aren't that friendly to software on the host CPU, and to make the networking functions go faster by accelerating difficult functions in the FPGA".
The adapters pair the Mellanox ConnectX-5 with a Xilinx Kintex UltraScale FPGA, configurable to accelerate either host applications or network applications.
Onboard connectivity – network interfaces, RDMA and PCIe – is configurable for either of these targets: host acceleration ("look-aside") or network acceleration ("bump in the wire").
As a bump in the wire, traffic from the Ethernet interfaces passes through the FPGA for network offload, before hitting the ConnectX-5 SoC and on to the host. In look-aside config, traffic is handed first to the SoC, which passes host acceleration workload traffic to the FPGA.
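The two configurations differ only in where the FPGA sits on the data path. A minimal sketch, with hypothetical names that model the hop ordering described above (nothing here is from a Mellanox SDK):

```python
# Hypothetical model of the two Innova-2 traffic paths described above.
# Component names are illustrative assumptions, not real API identifiers.

def data_path(mode: str) -> list[str]:
    """Return the ordered hops a packet takes from the wire to the host."""
    if mode == "bump_in_the_wire":
        # Traffic from the Ethernet interfaces passes through the FPGA
        # for network offload, then hits the ConnectX-5 SoC and the host.
        return ["ethernet", "fpga", "connectx5_soc", "host"]
    if mode == "look_aside":
        # The SoC receives traffic first and hands host-acceleration
        # workloads off to the FPGA before delivery to the host.
        return ["ethernet", "connectx5_soc", "fpga", "connectx5_soc", "host"]
    raise ValueError(f"unknown mode: {mode}")

print(data_path("bump_in_the_wire"))
print(data_path("look_aside"))
```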
The PCIe switch on the card can also be split into two paths.
The card also supports OpenCAPI (the Open Coherent Accelerator Processor Interface) because, Doud said, it's gaining industry traction with backers like IBM.
"OpenCAPI is a way to connect directly into the processor – in IBM's case, Power9. It's an improved bus, similar to PCI Express, but PCIe is not a coherent interface, and OpenCAPI is.
"Our connection is running over eight lanes, and each lane is running at 25Gbps, so the peak throughput is 200Gbps. After overheads, you've got between 160 and 170Gbps direct from the processor to the FPGA... that allows some very interesting offloads to be pushed off to the FPGA."
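Doud's arithmetic can be checked directly: eight lanes at 25Gbps gives a 200Gbps peak, and the quoted 160-170Gbps effective rate implies roughly 15-20 per cent overhead (the exact overhead fraction is an assumption here, inferred from his figures):

```python
# Checking the OpenCAPI throughput figures quoted in the article.
LANES = 8
LANE_RATE_GBPS = 25

peak_gbps = LANES * LANE_RATE_GBPS  # 8 * 25 = 200 Gbps raw

def effective_gbps(peak: int, overhead_percent: int) -> float:
    """Usable throughput after protocol overheads.

    The overhead percentage is an assumption; the article only quotes
    the resulting 160-170 Gbps range.
    """
    return peak * (100 - overhead_percent) / 100

print(peak_gbps)                      # 200
print(effective_gbps(peak_gbps, 15))  # 170.0
print(effective_gbps(peak_gbps, 20))  # 160.0
```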
There will be two versions of the card: one with dual 25Gbps Ethernet interfaces, and one with two 100Gbps interfaces configurable either as 200Gbps of Ethernet, or as 100Gbps each of Ethernet and InfiniBand.
Doud noted that the Ethernet-InfiniBand combo also means the card could be programmed to provide an efficient bridge between a corporate Ethernet network and an InfiniBand storage infrastructure.
Security applications like IPsec and TLS are a natural fit for inline processing, he said, along with DDoS mitigation and firewall workloads. These were already part of the Mellanox vision, with the FPGA providing additional speed and programmability.
It's the look-aside workloads that the company hopes will attract a new market, with Doud citing machine learning, the fledgling FPGA-as-a-service business, blockchain acceleration, search optimisation, and analytics.
The Innova-2 would also be suitable for storage acceleration, Doud said, in NVMe fabrics, handling workloads like compression and deduplication.
And, of course, putting the FPGA on the NIC also saves slots for people building hyperscale environments.
Programming the FPGA
Doud said while Mellanox is providing some FPGA applications as pre-canned capabilities (for example, security acceleration), the company also expects customers who already have FPGA skills to bring their own "magic".
Xilinx's toolkits and development suite are provided with the adapters, and customers will have access to Xilinx's ecosystem partners.
Some Mellanox intellectual property is offered to developers by way of what Doud called a "shim".
"Take the Ethernet ports, for example. You get the PHY and the MAC layers from Xilinx, Mellanox behind that can provide IP to implement functions you might find in ConnectX, like offloads, packet processing, and so on."
Similarly, the PCIe MAC layer is provided by Xilinx, while Mellanox offers some of the DMA engines (for example to handle data movement) "so that the customer doesn't have to reimplement that basic plumbing".
While the company has no ambition to become a service company, it's assembled a team of FPGA engineers to help customers with their "knowledge of the board and the system". ®