Xsigo is good for decluttering but can it clean up?

Facing up to data centre Ethernet

Comment I/O virtualiser Xsigo has won another deal and closed a funding round in June. But its technology could be just a short-term fix to a server-edge network clutter problem.

Xsigo's virtual I/O directors are going in as part of a Compellent SAN at Wholesale Electric of Houston.

Like Virtensys, which also recently completed a funding round, it aims to clean up server cable and interface card clutter by virtualising the separate I/O paths onto a single wire and pushing the clutter to the network side of its box, the VP780 I/O Director. But data centre Ethernet (DCE) is coming with the promise of unifying all data centre networking onto a single Ethernet infrastructure.

Cisco and other Fibre Channel over Ethernet (FCoE) supporters say converge everything from the server edge to the storage arrays and WAN ports onto Ethernet. Mellanox and Voltaire used to say converge everything onto InfiniBand, but now are supporting Ethernet with InfiniBand being a high-performance computing niche networking product.

Both Virtensys and Xsigo say, in effect, forget about overall data centre networking clutter.

FCoE and DCE, with its lossless, low-latency and multi-hop characteristics, won't be available in proper standard form for a year or more, they argue. Just concentrate on tidying up the server edge of data centre networks, the gateway to the various storage, LAN and WAN networks. You can save lots of cash by implementing a specific server-edge local network platform which presents itself to servers as virtual Ethernet network interface cards (NICs) and Fibre Channel Host Bus Adapters (HBAs).

Virtensys does it by providing a PCIe-based network between servers and its box. Proper NICs and HBAs get plugged into the box and are shared by servers which use them to connect to storage and LAN destinations.

Xsigo does it by providing a mini-InfiniBand (IB) network between the servers and its gateway box. Each server gets an IB Host Channel Adapter (HCA) and a 20Gbit/s IB link back to the I/O Director, with Xsigo software on the server presenting the HCA as virtual NICs and HBAs to up to 64 virtual machines inside that server.

The I/O Director provides IB server-to-server links as a by-product, but mainly acts as a gateway to the outside data centre networking world. It is an intelligent box which divvies up each server's IB bandwidth into NIC or HBA sub-paths as required. Xsigo doesn't say what hardware the VP780's processing power and software are based on.
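Since Xsigo doesn't publish the VP780's internals, the bandwidth-divvying idea can only be sketched illustratively. The class and method names below are invented; the model simply carves each server's 20Gbit/s IB link into virtual NIC and HBA allocations:

```python
# Purely illustrative sketch: the VP780's real internals are undisclosed.
# Models carving a server's 20Gbit/s IB link into vNIC/vHBA sub-paths.

class IODirector:
    LINK_GBPS = 20  # per-server IB link speed, as described in the article

    def __init__(self):
        self.allocations = {}  # server -> list of (device_name, kind, gbps)

    def add_virtual_device(self, server, name, kind, gbps):
        """Allocate a slice of a server's IB bandwidth to a vNIC or vHBA."""
        used = sum(g for _, _, g in self.allocations.get(server, []))
        if used + gbps > self.LINK_GBPS:
            raise ValueError(f"{server}: only {self.LINK_GBPS - used}Gbit/s free")
        self.allocations.setdefault(server, []).append((name, kind, gbps))

    def free_bandwidth(self, server):
        """Unallocated bandwidth left on a server's IB link."""
        return self.LINK_GBPS - sum(
            g for _, _, g in self.allocations.get(server, []))

director = IODirector()
director.add_virtual_device("server1", "vnic0", "NIC", 10)
director.add_virtual_device("server1", "vhba0", "HBA", 4)
print(director.free_bandwidth("server1"))  # → 6
```

The point of the sketch is the constraint, not the API: however the real box implements it, the vNICs and vHBAs handed to a server's VMs must together fit inside that server's single IB pipe.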

On the network side of the box there are 15 I/O modules, each of which can be a 10-by-1GigE link, a single 10GigE link, or a dual 4Gbit/s Fibre Channel link. These modules contain Xsigo intelligence too, meaning commercial, off-the-shelf NICs and HBAs can't be used.
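A rough way to see what those slot choices imply for external capacity is to model the chassis as 15 slots, each fitted with one of the three module types. The module names below are invented labels for the types the article describes:

```python
# Hypothetical model of the VP780's network side: 15 I/O module slots,
# each holding one of the three module types described in the article.

MODULE_TYPES = {
    "10x1GigE": {"ports": 10, "gbps_per_port": 1},   # ten 1GigE ports
    "1x10GigE": {"ports": 1,  "gbps_per_port": 10},  # one 10GigE port
    "2x4G-FC":  {"ports": 2,  "gbps_per_port": 4},   # dual 4Gbit/s FC
}

def chassis_capacity(slots):
    """Total external bandwidth (Gbit/s) for a list of fitted modules."""
    assert len(slots) <= 15, "the VP780 has 15 I/O module slots"
    return sum(MODULE_TYPES[m]["ports"] * MODULE_TYPES[m]["gbps_per_port"]
               for m in slots)

# e.g. ten Ethernet aggregation modules plus five dual-FC modules:
print(chassis_capacity(["10x1GigE"] * 10 + ["2x4G-FC"] * 5))  # → 140
```

So a fully populated chassis tops out somewhere between 120Gbit/s (all FC) and 150Gbit/s (all 10GigE) of external bandwidth under these assumed module specs.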

There are 24-port expansion boxes available to add server links to the VP780, and management software to visualise and administer the Xsigo server-edge network. The benefit is cost savings from eliminating server-edge interface and cable clutter. Wholesale Electric says it reduced its I/O infrastructure costs by 66 per cent going the Xsigo route.

IB supplier Mellanox has also developed a server-edge de-clutter product, BridgeX, which supports either a 40Gbit/s IB link to servers or a 10GigE one. Mellanox is sitting on the fence here, wanting to support both Ethernet and IB as a single server-edge network. Xsigo has gone full-tilt into IB.

Xsigo was founded in 2004, announcing itself and its technology in September 2007. Dell resells its kit and it received fresh funding from its investors in June; the amount, like the initial funding, was not disclosed. We can guess that it was substantial, though, as Xsigo is developing a hardware product set - the HCAs, VP780 director and expansion boxes - plus software to run on these products and in the servers.

The company has its headquarters in San Jose with sales offices in Australia, Japan and the UK. It did not say what the new money would be used for but we can reasonably assume it's going to fund continued product development and the sales and marketing operations. With Nehalem-based servers capable of supporting many more VMs, we can conceive of Xsigo bringing out 40Gbit/s IB links to the servers to provide the bandwidth the VMs would need, and a bigger I/O Director to support them.

Because Xsigo provides the interfaces to the data centre network, customers are dependent upon it to produce faster Fibre Channel interfaces running at 8Gbit/s, and the Converged Network Adapters (CNAs) needed for FCoE. Such developments will also need funding.

All networking suppliers are now telling data centre customers that servers have too many network interfaces, and that storage arrays should stop requiring a separate physical Fibre Channel interface. Network convergence is the mantra and Ethernet is the protocol king, according to Brocade, Cisco and every other Ethernet switch supplier.

Fibre Channel storage array vendors are beginning to support FCoE too - witness NetApp - and all the iSCSI storage vendors support Ethernet already. The Ethernet bandwagon is becoming unstoppable.

Mellanox, Virtensys and Xsigo say, okay, yes, that can happen in the future, but meanwhile, let's clear up the server edge networking mess right here and now by converging onto a separate server-edge-only network and providing an intelligent gateway to the mess outside.

The mess outside is catered for by providing Ethernet, Fibre Channel and IB ports from their gateway boxes. But if the mess outside reduces in a year or two, such that it is generally Ethernet, then why will servers need a shared infrastructure to access what has become an Ethernet gateway?

It's likely, isn't it, that Ethernet switch vendors will create their own virtualised server-edge switch with bandwidth apportionable between VMs using software? They could then say to customers: "The only server edge clutter you have now is that PCIe or IB server-edge network running between your servers and Ethernet. Why not clear it up, save cost and simplify things by having true end-to-end Ethernet between storage devices, data centre WAN network ports, and servers? Doesn't that make sense?"

That is a looming strategic problem that both Virtensys and Xsigo - less so Mellanox - will face. Meanwhile they can help customers save lots of money, they say, by just converging server-edge networking onto their own chosen technology platform.

Bigger Ethernet competitors or even server vendors might buy them for their technology. Equally they might take the view that Virtensys and Xsigo just provide a short-term fix until full data centre-class Ethernet arrives and provides wall-to-wall Ethernet right across the data centre.

It's going to be interesting to see how Virtensys and Xsigo deal with this problem. ®

Biting the hand that feeds IT © 1998–2021