
Cray notches another XE6 super sale

Multi-petaflops Cascade in the bag

Cray is getting traction with its XE6 supercomputers, launched this past May and first shipped at the end of July. The University of Stuttgart – which has a bunch of scalar and vector systems from IBM and NEC, as well as some hybrid machines and a baby Cray XT5m – is moving into Cray systems in a big way.

The university's High Performance Computing Center Stuttgart (HLRS) said on Tuesday that it was shelling out more than $60m to get its hands on a new XE6 supercomputer that is to be delivered and to go into production during 2011. That machine will eventually be upgraded to the future "Cascade" massively parallel super that Cray is developing in conjunction with the Defense Advanced Research Projects Agency's High Productivity Computing Systems program. (IBM's "Blue Waters" Power7-based monster and its PERCS programming environment are also being developed under the HPCS program at DARPA).

Cray was being cagey about how many cores and how many flops it will cram into the Stuttgart box, but a spokesperson for the company said that the machine would ultimately scale up to multiple petaflops under the deal. The upgrade to the Cascade super is expected to be completed by the second half of 2013.

HLRS goes way back with Cray, having used Cray vector machines from the Cray Research days and a Cray T3E parallel super more recently. But in the past decade, the university and its corporate partners have been running their applications on a mix of NEC vector supers and NEC and IBM parallel x64 clusters, with a dash of Cell co-processors and a splash of Nvidia GPU co-processors here and there for spice in the clusters.

The biggest machine at HLRS is the NEC Nehalem Cluster (very literal name, that), with 700 two-socket server nodes linked by both InfiniBand and Gigabit Ethernet switches, rated at 62 teraflops. A second machine, built by IBM and called hwGRID, comprises 498 two-socket x64 blades and seven Cell-based blade servers, plus some Nvidia Quadro FX 5800 GPUs, lashed together with InfiniBand switches from Voltaire; it is rated at 46 teraflops. The university lab also has a 19.2 teraflops NEC SX-9 vector supercomputer with 192 vector units in a dozen nodes (and a node is a big ole rack, not some pizza box made out of tin) and an ancient NEC SX-8 with ten nodes and 80 vector processors that is rated at 1.2 teraflops.

HLRS started moving back toward Cray systems last year, and was in fact the launch partner for the XT5m midrange parallel super that Cray announced in early 2009. The XT5m machine at HLRS has 112 two-socket server nodes and is rated at 8.5 teraflops.

Cray has been upgrading its top-end massively parallel supercomputer in a piecemeal fashion over the past two years, culminating in the components that are assembled as the XE6 system. The current eight-socket blade servers in the XE6 use twelve-core "Magny-Cours" Opteron 6100 processors from Advanced Micro Devices, but they originally debuted in the XT6 and XT6m supers with their "SeaStar2+" interconnect; the blade design is not substantially different from that used with the six-core "Istanbul" Opteron 8400 processors, except for a socket change for the CPUs.

The interconnect is not hardwired to the board, however, and that means Cray could slide out the SeaStar2+ chip modules and slap in the "Gemini" interconnect modules in May, creating the XE6. The final bit of secret sauce that makes the resulting machine an XE6 is the Cray Linux Environment 3.0, which debuted in April and which tricks the modified version of Novell's SUSE Linux Enterprise Server running on an XT6 or XE6 machine into thinking that the SeaStar2+ or Gemini interconnect lashing the cluster nodes together is an Ethernet link, with some MPI code playing traffic cop. This means customers don't have to recompile Linux cluster code written for x64 machines using MPI and Ethernet to run on an XE6 box.
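For a sense of what "no recompiling" buys you, here is a minimal sketch of the kind of plain-vanilla MPI code in question (our illustration, not Cray's): a standard C program written against the MPI API, with nothing in it that knows or cares whether the messages travel over Ethernet, SeaStar2+, or Gemini.

    /* Minimal, interconnect-agnostic MPI ring: an illustrative sketch,
       not Cray code. Nothing here is tied to Ethernet or Gemini; the
       MPI library and the OS handle the transport underneath. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size, token;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {           /* a ring needs at least two ranks */
            MPI_Finalize();
            return 0;
        }

        if (rank == 0) {
            /* Start a token around the ring of ranks, then collect it. */
            token = 42;
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, size - 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("Token passed through %d ranks\n", size);
        } else {
            /* Receive from the previous rank, forward to the next. */
            MPI_Recv(&token, 1, MPI_INT, rank - 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&token, 1, MPI_INT, (rank + 1) % size, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

On an ordinary cluster, the MPI library underneath would push those sends and receives over Ethernet or InfiniBand; on an XE6, per Cray's pitch, the same binary runs with Gemini doing the carrying.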

The SeaStar and Gemini interconnects use HyperTransport links coming off the Opteron processors to hook into ASICs created by Cray that lash the server nodes together in the cluster, thereby bypassing the PCI bus and whole chunks of the network stack in a normal server. This is great right up to the point where AMD doesn't deliver chips on time, or where you want Intel processors, which use the similar but incompatible QuickPath Interconnect scheme. The future "Aries" interconnect being developed under the Cascade project will connect into systems through the PCI-Express bus, allowing Cray to mix and match processor vendors and types within a single system.

The Gemini interconnect, which you can read all about here, splits the difference between SeaStar and Aries interconnects. Gemini has a 48-port router embedded on its ASIC (compared to six on the SeaStar) with an aggregate bandwidth of 168 GB/sec and can deliver 100 times the message throughput of the SeaStar2+ interconnect. The SeaStar2+ interconnect could scale to around 250,000 cores with its available bandwidth, but Barry Bolding, vice president of scalable systems at Cray, has told El Reg that the Gemini interconnect can scale to 1 million cores easily and has a theoretical limit of around 3 million cores (assuming AMD keeps adding cores to its chips at a predictable pace) and multiple sustained petaflops of number-crunching performance.

Cray has not said what the future Cascade machines and their Aries interconnect will deliver in terms of scalability, but it will no doubt go higher. DARPA is footing the bill for the development of the Aries interconnect and the server nodes based on future (and unnamed) Xeon processors from Intel. The point is: the University of Stuttgart did not have to go all the way to Cascade to get multiple petaflops of oomph, but there are some other goodies in there that make HLRS want to put a down payment on a Cascade machine right now.

The University of Stuttgart is getting what seems like a great price for a multi-petaflops parallel super. Cray has been charging around $45m per petaflops on XT6 and XE6 deals where data has been available. If the future HLRS Cray machine weighs in at around 2 petaflops, the new going rate works out to $30m per petaflops, and it could be lower still if the HLRS machine has more capacity than that.
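The back-of-the-envelope math, with the 2 petaflops figure being our assumption rather than anything Cray or HLRS has confirmed:

    /* Price-per-petaflops check. The $60m deal value comes from HLRS;
       the 2 petaflops capacity is an assumption, not a confirmed figure. */
    #include <stdio.h>

    int main(void) {
        double deal_musd  = 60.0;  /* reported HLRS deal value, $m       */
        double assumed_pf = 2.0;   /* assumed eventual capacity, PF      */
        double prior_rate = 45.0;  /* prior XT6/XE6 rate, $m per PF      */

        double implied_rate = deal_musd / assumed_pf;
        printf("Implied rate: $%.0fm per petaflops (down from ~$%.0fm)\n",
               implied_rate, prior_rate);
        return 0;
    }

A bigger machine for the same $60m would, of course, drive the implied rate below $30m per petaflops.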

In a related announcement, Cray said in a filing with the Securities and Exchange Commission that it had completed milestone number 8 on the Cascade project, worth $12m, and that DARPA would kick in the funds in time for a significant majority, and possibly all, of the money to be credited against Cray's research and development expenditures for Cascade in the fourth quarter. ®
