Hot on the heels of its introduction of two Ethernet-based I/O virtualization director switches, Xsigo Systems will roll out a line of I/O director switches based on 40 Gb/sec, quad-data rate (QDR) InfiniBand silicon.
The original VP780 I/O Director was based on dual-data rate (20 Gb/sec) InfiniBand networking. The I/O director replaced multiple Ethernet server and Fibre Channel storage links coming out of the back end of blade or rack servers with a single, full-duplex 20 Gb/sec link back to the director switch.
This director switch then linked out to other servers or storage. The software running on the VP780 I/O Director could provide up to 64 virtual network interface cards or host bus adapters for storage to a single server and change them on the fly as workloads changed.
Xsigo launched a scaled down VP560 I/O Director in May; both the VP560 and VP780 switches required companies to install an InfiniBand host bus adapter on the server and use InfiniBand cabling back to the I/O director.
Because lots of companies don't want InfiniBand (despite its benefits on several fronts), in late August Xsigo introduced tweaked versions of these two virtual I/O directors that supported Gigabit and 10 Gigabit Ethernet links to and from servers. This meant the Xsigo switches could plug into any existing server's on-board Ethernet ports and virtualize those links - no extra hardware required.
The new VP780q and VP560q I/O Directors are based on QDR InfiniBand chips that Xsigo buys from Mellanox. Both directors offer 20 ports out to servers, with the difference between the two machines being how many I/O modules can be added to the box.
The VP780s (both the DDR and QDR models) come in a 4U chassis and hold 15 I/O modules. The slower DDR switch has 24 ports out to servers, while the faster QDR switch has 20. The VP560s have the same number of IB ports, but sport only four I/O modules for linking out to external networks and storage. Each I/O module has four Gigabit Ethernet ports, one 10 Gigabit Ethernet port, two 4 Gb Fibre Channel ports, and an SSL encryption offload module.
So why do people need virtualized I/O switches that have 40Gb/sec of bandwidth? Because they are starting to get serious about server virtualization.
"We're seeing people get really serious about getting all of their applications virtualized," Xsigo vice president of marketing Jon Toor explained. "This is a real big change from a year ago."
There are two things driving up bandwidth needs. For one, it is not uncommon these days to see 20 virtual machines per physical server host, and with a decent mix of I/O heavy and I/O light workloads, those 20 virtual machines can saturate a 20 Gb/sec link from the server out to the I/O director when it is supporting both server and storage networking needs.
The other thing driving the need for 40 Gb/sec links between servers and the I/O directors sold by Xsigo is that customers are starting to virtualize I/O heavy workloads such as Oracle databases and Exchange email servers.
With the bandwidth available in the new VP780q and VP560q I/O Directors, Toor says that companies can cram more than 50 virtual machines on a single server host and have enough bandwidth to keep them fed and applications running smoothly. That full-duplex QDR InfiniBand link is fat enough and fast enough to support 15,000 Netflix HD video streams at the same time.
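That Netflix figure passes a rough sanity check. Assuming (our assumption, not Xsigo's) an HD stream of the era needs roughly 2.5 Mb/sec, and treating the full 40 Gb/sec QDR signaling rate as usable in one direction:

```python
# Back-of-the-envelope check of the 15,000-stream claim.
# Assumptions (ours, not Xsigo's): ~2.5 Mb/sec per HD stream,
# and the full 40 Gb/sec QDR signaling rate usable one way.
link_gbps = 40     # one direction of the full-duplex QDR link
stream_mbps = 2.5  # assumed per-stream HD bitrate
print(int(link_gbps * 1000 / stream_mbps))  # → 16000 streams
```

That lands in the same ballpark as Toor's 15,000-stream figure, with some headroom left over for encoding and protocol overhead.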
The new VP780q (left) and VP560q (right) I/O Directors, which use QDR InfiniBand.
The VP780q and VP560q I/O Directors will be available in December, and they carry a premium price compared to the 20 Gb/sec units they replace. The VP560q costs $35,000 in a base configuration, compared to $20,000 for the VP560. Twice the bandwidth and 83 per cent of the ports for a little less than twice the price.
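For the curious, that VP560 comparison works out like this, using the list prices and port counts above:

```python
# Numbers behind the "twice the bandwidth, 83 per cent of the ports,
# a little less than twice the price" line for the VP560 family.
vp560_price, vp560q_price = 20_000, 35_000   # DDR vs QDR list prices
vp560_ports, vp560q_ports = 24, 20           # server-facing IB ports
print(vp560q_price / vp560_price)            # → 1.75 (a little under 2x)
print(round(vp560q_ports / vp560_ports, 2))  # → 0.83 (83 per cent)
```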
The VP780q will sell for $45,000, compared to $30,000 for the prior VP780. The VP780 Ethernet I/O Director costs $45,000 and the VP560 Ethernet I/O Director costs $35,000. These latter machines sport 32 10 Gigabit Ethernet ports and varying numbers of I/O modules.
In addition to the new InfiniBand virtual I/O director switches, Xsigo is rolling out XMS 3.0, a new release of its management console for the directors. With this iteration of the XMS console, network and server administrators can take groups of servers or groups of connections and manage them as a single unit instead of dealing with connections on a port-by-port basis.
With hundreds to thousands of servers in a cloudy pool of boxes, such groupings are the only way of managing connections. XMS 3.0 creates connectivity templates that standardize the virtual network interfaces so they can be created and changed all at once.
The GUI in the tool allows identical connections to be viewed and their I/O performance monitored as a pool, while still letting admins drill down into individual connections to locate performance bottlenecks. XMS 3.0 is available now and costs $10,000. ®