InfiniBand and 10GbE break data centre gridlock

Bandwidth bonanza in the data centre

Analysis Data centre network pipes are getting choked up. Imagine Germany without its autobahns or the US without its interstate highways and you get the picture: cities trying to send goods and people by road to other cities, single-carriageway roads jamming up, and everybody consigned to gridlock.

What were quaint little cottage industries in the data centre - Windows servers on slow Ethernet LANs - have turned into humming temples to automated IT mass production, as bladed, virtualised, multi-core, multi-socket servers scream through their processing loads at Star Trek warp speed and then... wait... wait... for the slow network to cough up the next slug of server processor fuel: the data.

Even the high-performance computing (HPC) world is suffering. Although it has its own dedicated InfiniBand super-highways, which make Ethernet look like snail mail next to email, supercomputers have become addicted to processing cores: yesterday's 100-core model gave way to a 1,000-core model, and multi-thousand-core super-duper-whooper-computers seem quite common these days. Double data rate (DDR) 20Gbit/s InfiniBand (IB) isn't enough.

A network pipe delivering data to servers is just like a pipe bringing water to a shower head. If the same pipe has to serve two shower heads then each one gets half the water. Four heads means each gets a quarter of the pipe's water. Eight heads - you can see how it goes.
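For the arithmetically inclined, here is a minimal Python sketch of that sharing - the pipe capacity and consumer counts are made-up numbers, and it assumes the bandwidth is split perfectly evenly:

# Hypothetical example: an evenly shared pipe gives each consumer 1/N of its capacity.
def share_per_consumer(pipe_capacity_gbps, consumers):
    """Bandwidth each consumer sees if the pipe is split evenly."""
    return pipe_capacity_gbps / consumers

for heads in (1, 2, 4, 8):
    print(f"{heads} consumer(s): {share_per_consumer(10.0, heads):.2f} Gbit/s each")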

So data centre network pipe technology is getting upgraded as virtualised and bladed servers cry out for faster I/O to keep them busy. InfiniBand and Ethernet products from Alacritech, Mellanox and QLogic are bumping network speeds up twofold and tenfold respectively.
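Assuming the twofold jump refers to 20Gbit/s DDR InfiniBand giving way to 40Gbit/s quad data rate (QDR), and the tenfold jump to Gigabit Ethernet giving way to 10GbE, the arithmetic looks like this - a rough Python sketch using signalling rates and ignoring encoding and protocol overheads:

# Assumed speed bumps (signalling rates only; encoding and protocol overheads ignored).
upgrades = {
    "InfiniBand 4x DDR -> 4x QDR": (20, 40),  # Gbit/s per link
    "GbE -> 10GbE": (1, 10),                  # Gbit/s per port
}

for name, (old_gbps, new_gbps) in upgrades.items():
    print(f"{name}: {old_gbps} -> {new_gbps} Gbit/s ({new_gbps / old_gbps:.0f}x)")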

10GigE?

Mellanox has produced a Converged Network Adapter (CNA), the ConnectX ENt, a two-port 10GbE product that sits on a server's motherboard. Mellanox is primarily known for InfiniBand technology, so 10GbE is a bit of a departure for it. The technology can support virtualisation acceleration features like NetQueue and SR-IOV, as well as I/O consolidation fabrics like Data Centre Ethernet (DCE), Fibre Channel over Ethernet (FCoE) and InfiniBand over Ethernet (IBoE). (With InfiniBand supporting Ethernet we could have an infinity of network recursion here.)

An eight-core server could use this product to deliver more than 1Gbit/s of Ethernet bandwidth to each core. Mellanox says the ConnectX ENt costs $200-300 per port, versus $400-500 for a Fibre Channel HBA and $300-400 per port for a QLogic CNA. The product does not include a TCP/IP offload engine (TOE) and does not support iSCSI.
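To see where the more-than-1Gbit/s-per-core figure comes from, here is a minimal sketch; it assumes both 10GbE ports are driven flat out and the bandwidth spreads evenly across the cores, which real workloads rarely manage:

# Back-of-the-envelope check on the per-core bandwidth claim (assumes an even spread).
ports = 2
port_rate_gbps = 10.0   # 10GbE per port
cores = 8

per_core_gbps = ports * port_rate_gbps / cores
print(f"Two-port 10GbE adapter: {per_core_gbps:.2f} Gbit/s per core")  # 2.50

# Compare a single Gigabit Ethernet port shared the same way.
print(f"Single GbE port: {1.0 / cores:.3f} Gbit/s per core")           # 0.125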
