Sun buffs InfiniBand for Constellation supers
Over two petaflops sold
The second-generation InfiniBand switch that Sun Microsystems has been showing off since last November made its debut this morning at the International Supercomputing Conference in Hamburg, Germany. The new switch - coupled to new servers based on Intel's "Nehalem EP" Xeon 5500 processors as well as existing quad-core "Shanghai" Opterons (and soon to be six-core variants) - is the core of the upgraded Constellation HPC clusters that Sun has been pushing for two years as a means of getting back into the supercomputing space.
The new "Project M2" 648-port modular quad data rate (QDR) InfiniBand switch - as well as two low-end fixed-port switches whose ports run at the same 40 Gb/sec speed - are all based on new InfiniBand protocol chips made by Mellanox. (Yesterday, ahead of ISC '09, that vendor launched its own line of QDR switches, which also scale up to 648 ports.) Sun had been previewing its QDR InfiniBand switches, its Nehalem EP blade servers, and some integrated storage (with solid state drives) aimed at HPC customers; now it is ready to start shipping boxes.
According to Michael Brown, marketing manager for HPC at Sun, the company has sold over 2 petaflops of Constellation machinery and about half of that is based on the new Nehalem machines that were announced two months ago and the new QDR InfiniBand switches. "That's a pretty big chunk of business," says Brown with a certain amount of satisfaction.
To be fair, the Constellation boxes have been a bright spot for Sun, which is finally getting some play on the Top 500 list of supercomputers. About a quarter of the petaflops that Sun has shipped or that are on order for Constellation boxes come from one machine, the "Ranger" Constellation box at the University of Texas, with a few other big deals contributing tens of teraflops on top of that. Constellation needs a lot more sales, as do Sun's generic rack and blade servers for customers who don't want to adopt InfiniBand and who might prefer cheap Gigabit Ethernet or, as an alternative, 10 Gigabit Ethernet switching.
A single Constellation rack has 48 full-height or 96 half-height blade servers, plus the switching and storage, for a maximum of 768 cores using Nehalem EP Xeon or Shanghai Opteron processors. Various labs are thinking well below the petaflops performance level that IBM, Cray, Silicon Graphics, and Sun are chasing (and, to a lesser extent, Dell and Hewlett-Packard) and are looking at buying Constellation machines that span only one or two racks. The adoption of the six-core Istanbul Opterons sometime in the next quarter in the X6240 and X6440 blade servers, which will only require a BIOS update on the blades, certainly won't hurt sales of smaller racks, allowing customers to pack 1,152 cores in a rack.
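The rack arithmetic above can be checked with a quick back-of-the-envelope sketch. Note this assumes two processor sockets per half-height blade, a figure not stated in the article but consistent with the core counts it quotes:

```python
# Back-of-the-envelope core counts for one Constellation rack.
# Assumption (not stated in the article): two sockets per half-height blade,
# which is consistent with the 768- and 1,152-core figures quoted.

HALF_HEIGHT_BLADES = 96  # per rack (or 48 full-height blades)
SOCKETS_PER_BLADE = 2    # assumed

def cores_per_rack(cores_per_socket: int) -> int:
    """Total cores in a fully populated rack of half-height blades."""
    return HALF_HEIGHT_BLADES * SOCKETS_PER_BLADE * cores_per_socket

print(cores_per_rack(4))  # quad-core Nehalem EP or Shanghai Opteron: 768
print(cores_per_rack(6))  # six-core Istanbul Opteron: 1152
```

Under those assumptions, the quoted jump from 768 to 1,152 cores per rack follows directly from swapping quad-core for six-core sockets.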
Brown says that Sun's HPC business is more than just Constellation boxes, but he was not at liberty to say what percentage of Sun's HPC sales come from outside of Constellation setups. As an example, he says that the University of North Carolina at Chapel Hill has bought seventeen of Sun's X4600 Opteron servers (which each have 16 cores) plus some storage and its Grid Engine grid software to make a baby cluster. This setup at UNC also includes 45 Sun workstations and a mix of storage, and it harks back to the kinds of deals Sun used to do all the time in the 1990s, deals that made it a name in academic computing right beside Digital Equipment.