Neither a blade nor a rack – a platypus of sorts
The iDataPlex dx360 M4 server may not officially start shipping until April 16, but some big customers in the supercomputing space are already buying these hybrid rack/blade machines to build HPC clusters.
Just this week, the US National Oceanic and Atmospheric Administration's National Weather Service said it was moving from a cluster of Power 575 servers using Power6 processors to a new 149 teraflops iDataPlex platform using Intel's Xeon E5-2600 processors. Last November the US National Center for Atmospheric Research, which does longer-range climate modeling, tapped IBM to replace its own cluster of Power 575 machines with a much larger 1.6 petaflops Xeon E5-2600 cluster, called "Yellowstone". The Leibniz Supercomputing Centre (LRZ) in Germany is also building a 3 petaflops supercomputer called "SuperMUC" based on the new iDataPlex nodes.
The iDataPlex rack setup is half as deep as a standard server rack, and comes in a cabinet with two side-by-side columns of machines. As long as you can cope with the wider, shallower racks, you can pack twice as many servers onto a square foot of floor space as you can with standard rack machines.
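That density claim is easy to sanity-check with some back-of-the-envelope arithmetic. The rack dimensions and node count below are hypothetical, chosen only to illustrate the geometry:

```python
# Back-of-the-envelope density check (all dimensions hypothetical).
# An iDataPlex cabinet is half as deep as a standard rack but holds
# two side-by-side columns of nodes, so the footprint stays roughly
# the same while the node count doubles.

STD_DEPTH_FT = 3.0      # assumed standard rack depth
STD_WIDTH_FT = 2.0      # assumed standard rack width
NODES_PER_COLUMN = 42   # assumed 1U nodes per column

std_density = NODES_PER_COLUMN / (STD_DEPTH_FT * STD_WIDTH_FT)

# iDataPlex: half the depth, twice the width (two columns), twice the nodes
idp_footprint = (STD_DEPTH_FT / 2) * (STD_WIDTH_FT * 2)
idp_density = (2 * NODES_PER_COLUMN) / idp_footprint

print(idp_density / std_density)  # density ratio: 2.0
```

Because halving the depth and doubling the width cancel out, the footprint is unchanged while the server count doubles, which is where the "twice as many servers per square foot" figure comes from.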
The iDataPlex dx360 M4
The dx360 M4 can stack two two-socket compute nodes in a single enclosure, and can use any Xeon E5-2600 from the power-sipping 60-watter all the way up to the turbine-spinning 130-watter. According to the IBM spec sheets, the top-bin eight-core, 2.9GHz, 135 watt E5-2690 is not supported on the dx360 M4, and neither is the four-core, 3.3GHz Xeon E5-2643.
Each dx360 M4 node has two processors and a total of 16 memory slots, the same as the HS23 blade server. There are four memory channels per socket, but you can only use two DIMMs per channel. Also, you can only use unbuffered or registered DDR3 sticks – no LR-DIMMs.
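Those 16 slots fall straight out of the channel layout. Here's the arithmetic, with a 16GB RDIMM size thrown in purely as an illustrative assumption (IBM's spec sheet isn't being quoted on stick sizes here):

```python
# Memory slot arithmetic for a dx360 M4 node, per the channel layout above.
SOCKETS = 2
CHANNELS_PER_SOCKET = 4   # four DDR3 channels per Xeon E5-2600 socket
DIMMS_PER_CHANNEL = 2     # the stated per-channel limit

slots = SOCKETS * CHANNELS_PER_SOCKET * DIMMS_PER_CHANNEL
print(slots)  # 16 memory slots, matching the spec

GB_PER_DIMM = 16          # hypothetical RDIMM capacity, for illustration only
print(slots * GB_PER_DIMM)  # 256GB with sticks of that size
```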
The machine has two PCI-Express 3.0 slots on riser cards and a PCI-Express 3.0 x8 mezzanine card that can be used for either 10GE or InfiniBand networking. Each node in the two-node system can have two Gigabit Ethernet ports, and there's one 3.5-inch drive bay for local storage.
One interesting bit about the iDataPlex design is that the power supplies and disk slot are off to the left side and pulled out a little, with all of the peripheral and networking slots in the front of the machine, and the CPUs and memory at the back. Here's what it looks like:
The dx360 M4 server supports the latest releases of Microsoft Windows Server 2008 R2, Red Hat Enterprise Linux 5 and 6, and SUSE Linux Enterprise Server 10 and 11. The iDataPlex dx360 M4 2U chassis costs $455, and a compute node with no processors or disk installed, but 16GB of memory, costs $3,969. You can put two Nvidia Tesla GPU co-processors into each node of the dx360 M4.
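With two nodes per 2U chassis, those list prices make rough cluster budgeting straightforward. A sketch follows; the 100-node count is arbitrary, and a real configuration would add processors, disks, GPUs, and networking on top:

```python
# Rough cluster pricing from the list prices quoted above.
CHASSIS_PRICE = 455        # 2U iDataPlex chassis
NODE_PRICE = 3969          # node with 16GB of memory, no CPUs or disk
NODES_PER_CHASSIS = 2

def base_cluster_cost(nodes: int) -> int:
    """Chassis plus bare nodes; CPUs, disks, and switches are extra."""
    chassis = -(-nodes // NODES_PER_CHASSIS)  # ceiling division
    return chassis * CHASSIS_PRICE + nodes * NODE_PRICE

print(base_cluster_cost(100))  # $419,650 for a hypothetical 100-node cluster
```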
Finally, IBM did one more thing, this time on its existing BladeCenter HX5 blade servers, which launched two years ago with the Xeon 7500 processors and which were updated last year with the Xeon E7s. These Intel chips are for larger SMPs or two-socket boxes that need fatter memory. Starting March 16, IBM will offer four-rank DDR3 memory sticks at 16GB capacities running at 1.35 volts, lower than the standard 1.5 volt memory.
Gigabyte for gigabyte, the low-voltage memory delivers the same capacity for approximately 20 per cent less juice, and in machines with 256GB across two sockets, the power savings can add up. And now customers can have fatter memory sticks than last year, when 8GB was the upper limit for the HX5 machines using low-volt memory. The 1.35 volt memory is only supported with the Xeon E7 chips, which have support for it etched into their on-chip memory controllers. ®
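To put that 20 per cent in perspective, here is a sketch of the per-node savings. The per-DIMM wattage is a hypothetical assumption – actual DDR3 RDIMM draw varies with load and rank count – so treat the output as illustrative only:

```python
# Illustrative power savings from 1.35V versus 1.5V memory.
TOTAL_GB = 256              # the fully loaded config cited above
GB_PER_DIMM = 16            # the new low-voltage stick size
WATTS_PER_DIMM = 5.0        # hypothetical draw for a 1.5V RDIMM
SAVINGS_FRACTION = 0.20     # the roughly 20 per cent figure quoted above

dimms = TOTAL_GB // GB_PER_DIMM
standard_watts = dimms * WATTS_PER_DIMM
saved_watts = standard_watts * SAVINGS_FRACTION
print(saved_watts)  # watts saved per 256GB machine, under these assumptions
```

A handful of watts per machine sounds trivial, but multiplied across the hundreds or thousands of nodes in the clusters mentioned above, it is the kind of saving HPC shops care about.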