Mellanox uncloaks SwitchX network switch-hitter

Chip turns InfiniBand, Ethernet, and Fibre Channel triple play

Servers have been virtualized, storage has been virtualized, and now it's the network's turn, thanks to Mellanox Technologies, a maker of chips and switches running the InfiniBand and Ethernet protocols.

With its SwitchX multi-protocol ASIC, Mellanox has come up with a single chip that it believes will be the backbone of best-in-breed InfiniBand, Ethernet, or multiprotocol switches, which will eventually be able to change their protocols on demand.

Mellanox has a vested interest in both InfiniBand and Ethernet, especially after last year's $218m acquisition of rival Voltaire, which gave Mellanox additional InfiniBand goodies as well as 10 Gigabit Ethernet switches. John Monson, vice president of marketing at Mellanox, tells El Reg that the SwitchX chip, which handles both InfiniBand and Ethernet from a single ASIC rather than gluing a separate InfiniBand chip and an Ethernet chip into one package, is no gimmick.

"This is the first time someone has built a protocol-independent ASIC, as far as we know," Monson says. "The funny thing [is that] there isn't much difference between the two, Ethernet and InfiniBand, inside the chip. But don't get the wrong idea. When SwitchX is running as an Ethernet switch, it's not running in an emulated mode or in a wrapper. It is an Ethernet switch. And when it is running as InfiniBand, it is similarly not emulated or wrapped."

So SwitchX really does swing both ways, and this isn't a college experiment. In that regard, the SwitchX ASIC is just like the multiprotocol ConnectX-2 host adapters that Mellanox launched 18 months ago.

A massive wafer-baking shrink and the advent of lossless Ethernet protocol enhancements (basically stolen from InfiniBand), among other things, have come together to make the converged SwitchX ASICs possible.

The current generation of InfiniScale IV chips, which Mellanox designed to support 40Gb/sec (quad data rate) InfiniBand, is implemented in a 90-nanometer process by Taiwan Semiconductor Manufacturing Co. That chip has around 450 million transistors and was able to power a switch in 2008 that pushed QDR InfiniBand ports (10Gb/sec per lane, 40Gb/sec per 4x port) while burning under 75 watts at the ASIC level.

The SwitchX chip, which is the fifth generation of ASICs that Mellanox has cooked up, shrinks down to 40 nanometers and boosts the transistor count to 1.4 billion; it comes in a 45mm by 45mm package.

The Mellanox switch hitter: SwitchX

The SwitchX chip has a lot more functionality than its predecessor and burns a lot less juice, and at 4Tb/sec of unified switching and routing bandwidth, it packs a potent punch. The chip has 144 Serdes (serializer/deserializer) serial-parallel blocks, each of which can run at anywhere from 1Gb/sec to 14Gb/sec, and it knows how to do the InfiniBand, Ethernet, and Fibre Channel protocols and their hybrid variants.

On the InfiniBand side, that includes Fibre Channel over InfiniBand (FCoIB), Ethernet over InfiniBand (EoIB), and Remote Direct Memory Access (RDMA). SwitchX also includes the InfiniBand-to-Ethernet and InfiniBand-to-Fibre Channel bridging functions that were part of Mellanox's BridgeX line of chips. On the Ethernet side, you can do data-center bridging (DCB), Fibre Channel over Ethernet (FCoE), and RDMA over Converged Ethernet (RoCE), as well as Ethernet-to-InfiniBand and Ethernet-to-Fibre Channel bridging.

As El Reg detailed last summer, the InfiniBand and Ethernet roadmaps are going to more or less keep pace with each other for the foreseeable future. The next rev of InfiniBand is called Fourteen Data Rate – FDR for short – which refers to its 14Gb/sec lane speed. At the server level in a network, an InfiniBand port has four lanes, which gives it an aggregate peak bandwidth of 56Gb/sec.
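
For the arithmetically inclined, the lane math is simple enough to check with a few lines of Python – this is our own back-of-the-envelope sketch, not anything out of the InfiniBand spec sheets:

```python
# Peak port bandwidth is just the lane signalling rate times the lane count.
# The rates below are the ones quoted in this article; "4x" means four lanes.
LANES_PER_4X_PORT = 4

def port_rate_gbps(lane_rate_gbps: float, lanes: int = LANES_PER_4X_PORT) -> float:
    """Peak per-port rate in Gb/sec for a multi-lane InfiniBand port."""
    return lane_rate_gbps * lanes

print(port_rate_gbps(2.5))  # SDR: 2.5 Gb/sec per lane -> 10 Gb/sec per 4x port
print(port_rate_gbps(10))   # QDR: 10 Gb/sec per lane  -> 40 Gb/sec per 4x port
print(port_rate_gbps(14))   # FDR: 14 Gb/sec per lane  -> 56 Gb/sec per 4x port
```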

Mellanox, QLogic, and a few other players are gearing up for FDR InfiniBand, and at the same time all of the Ethernet switch makers are starting to bump up to 40Gb/sec. The SwitchX chips will be able to do both, as well as support lower-speed Ethernet or InfiniBand ports, says Monson.

The SwitchX block diagram

The SwitchX chip can support 36 4x InfiniBand ports running at anywhere from the 10Gb/sec of the original Single Data Rate (SDR) InfiniBand all the way up to the 56Gb/sec of the brand-new FDR InfiniBand. The chip can also drive up to 64 Ethernet ports running at 10Gb/sec or 20Gb/sec, or 36 ports running at 40Gb/sec, and it can support two dozen Fibre Channel ports running at 2, 4, or 8Gb/sec.
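
Here is a rough sketch of how those port counts might map onto the chip's 144 Serdes blocks – the lanes-per-port assignments below are our own illustrative assumptions, not Mellanox's documented lane mapping:

```python
# Rough Serdes budget for the configurations quoted above. The lanes-per-port
# figures are assumptions for illustration, not Mellanox's documented mapping.
TOTAL_SERDES = 144

configs = {
    # name: (ports, assumed Serdes lanes per port, Gb/sec per lane)
    "36 x FDR InfiniBand 4x": (36, 4, 14),
    "36 x 40GbE":             (36, 4, 10),
    "64 x 10GbE":             (64, 1, 10),
}

for name, (ports, lanes, rate) in configs.items():
    used = ports * lanes
    aggregate = used * rate
    assert used <= TOTAL_SERDES
    print(f"{name}: {used}/{TOTAL_SERDES} Serdes, {aggregate} Gb/sec per direction")
```

Run the FDR numbers and you get just over 2Tb/sec in each direction, which lands at roughly the 4Tb/sec of switching bandwidth quoted earlier if that figure counts traffic both ways.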

Monson says that Mellanox expects a SwitchX chip to burn around 55 watts when configured in a switch to drive 40 Gigabit Ethernet ports, around 40 watts driving 10 Gigabit Ethernet ports, and around 72 watts driving 56Gb/sec FDR InfiniBand ports. That works out to about 1.5 watts per 40 Gigabit Ethernet port, 0.62 watts per 10 Gigabit Ethernet port, and around 2 watts per FDR InfiniBand port.
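
Those per-port figures follow straight from the chip-level numbers and the port counts given above – a quick check:

```python
# Per-port power worked out from the quoted chip-level figures.
chip_power = {
    "40GbE (36 ports)":          (55, 36),
    "10GbE (64 ports)":          (40, 64),
    "FDR InfiniBand (36 ports)": (72, 36),
}

for mode, (watts, ports) in chip_power.items():
    print(f"{mode}: {watts / ports:.2f} W per port")
# Prints roughly 1.53, 0.62 and 2.00 W per port, matching the figures above
```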

Bandwidth and power consumption are obviously on the minds of switch and router buyers, but so is latency. It will take around 175 nanoseconds to do a port-to-port hop on a SwitchX device configured to run FDR InfiniBand, and for a 40Gb/sec Ethernet configuration it is probably a little over 200 nanoseconds, says Monson. If you want to do routing functions on top of these protocols, add another 100 nanoseconds or so. That's about half the port-to-port latency of current QDR InfiniBand or 10 Gigabit Ethernet switches, according to Mellanox.
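
Those per-hop numbers add up across a fabric, of course. Here is a hypothetical worked example – the three-hop path and the single routed hop are our own assumptions; only the per-hop and routing figures come from Mellanox:

```python
# Illustrative latency budget for a path crossing three SwitchX hops,
# with routing enabled on one of them. The topology is hypothetical;
# only the per-hop and routing numbers come from the article.
FDR_HOP_NS = 175     # port-to-port hop, FDR InfiniBand configuration
ETH40_HOP_NS = 200   # port-to-port hop, 40Gb/sec Ethernet (roughly)
ROUTING_NS = 100     # extra cost per hop that also does routing

def switch_latency_ns(hop_ns: int, hops: int, routed_hops: int = 0) -> int:
    return hop_ns * hops + ROUTING_NS * routed_hops

print(switch_latency_ns(FDR_HOP_NS, hops=3))                   # 525 ns
print(switch_latency_ns(ETH40_HOP_NS, hops=3, routed_hops=1))  # 700 ns
```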

Mellanox will make its own switches using the SwitchX ASIC as well as sell the chip to others. Monson says that "within a short window of time," Mellanox will release switching products.

In phase one of its rollout, the company will ship switches configured either as Ethernet or InfiniBand devices. A second phase will bring multimode and hybrid Ethernet/InfiniBand switches with integrated routing functions. It will probably take somewhere between 12 and 18 months to do the multiprotocol switch, Monson tells El Reg.

The good news is that 40Gb/sec Ethernet and QDR and FDR InfiniBand all use the same QSFP connectors, so – in theory at least – you will at some point be able to wire up your clusters of machines and switch protocols on the fly for all or part of them, as application workloads dictate.

In the meantime, Mellanox partners can get their hands on an evaluation board and switch that implements the SwitchX chip and its software development kit. ®
