IBM 'Blue Waters' super node washes ashore in August

Big, big (blue) flops for big, big (green) bucks


Big Blue has been talking about the Power7-based "Blue Waters" supercomputer nodes for so long that you might think they're already available. But although IBM gave us a glimpse of the Power 775 machines way back in November 2009, they actually won't start shipping commercially until next month – August 26, to be exact.

The feeds and speeds of the Power 775 server remain essentially what we told you nearly two years ago. Today's news is that the Power 775 is nearly ready for sale, and that the clock speed of the Power7 processors and the system prices have – finally – been announced.

Formerly known as the Power7 IH node, the Power 775 is not a general-purpose server node, but rather an ultra-dense, water-cooled rack server that pushes density and network bandwidth to extremes. Speaking of density, there's not enough room between the components on the Power 775 server node – the processor units, the main memory, and the I/O units – to slip a credit card.

The brains of the Power 775 server are a multi-chip module (MCM) that crams four Power7 processors, each with eight cores and four threads per core, onto a single piece of ceramic substrate with a 5,336-pin interconnect. The chips, we now learn, run at 3.84GHz, toward the top of the 3.5GHz-to-4GHz range that IBM was anticipating. The chip package burns 800 watts, which is why it needs water cooling.

The Power 775 node is 30 inches wide, 6 feet deep, and 3.5 inches (2U) high – not exactly a small piece of iron. That node can have up to eight of the Power7 MCMs, for a total of 256 cores, on a single, massive motherboard. Each MCM has a bank of DDR3 memory associated with it, and these banks have beefy buffer chips to improve bandwidth and performance into and out of the processors.

Each bank has 16 memory slots, and IBM's plan two years ago was to use 8GB sticks – but now that we are in 2011, the company has jacked up the capacity to 16GB per stick, doubling the maximum memory to 2TB per node.
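
For those keeping score at home, here's how the node-level numbers fall out of the per-MCM specs – a quick back-of-the-envelope sketch using only the figures quoted in this story:

```python
# Sanity check on the Power 775 node specs quoted above.
chips_per_mcm = 4          # Power7 chips per multi-chip module
cores_per_chip = 8
threads_per_core = 4
mcms_per_node = 8

cores_per_mcm = chips_per_mcm * cores_per_chip            # 32 cores per MCM
cores_per_node = cores_per_mcm * mcms_per_node            # 256 cores per node
threads_per_node = cores_per_node * threads_per_core      # 1,024 threads per node

slots_per_mcm = 16
gb_per_stick = 16          # up from the 8GB sticks planned in 2009
memory_per_node_gb = mcms_per_node * slots_per_mcm * gb_per_stick  # 2,048GB = 2TB

print(cores_per_node, threads_per_node, memory_per_node_gb)
```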

The Power 775 HPC server node

The nervous system of the Blue Waters machine is the Power7 IH hub/switch, which comes in the same 5,336-pin package as the processor MCM and which links the eight MCMs on the board to each other, to their PCI-Express peripherals, and to other nodes in adjacent racks in the complete HPC system.

Each 30-inch rack can hold up to a dozen of these Power 775 servers, and the Blue Waters interconnect allows up to 2,048 Power 775 drawers (a total of 524,288 cores) to be linked together, with 24TB of main memory and up to 230TB of disk/flash capacity per rack. Each Power 775 node delivers 8 teraflops of raw number-crunching power, so a 2,048-node machine that is extremely light on disk capacity would yield over 16 petaflops of raw performance.
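
The scale-out arithmetic, for the curious – again built only from the figures above:

```python
# Scaling the per-node figures up to a maximal Blue Waters fabric.
cores_per_node = 256
teraflops_per_node = 8
max_nodes = 2048

total_cores = max_nodes * cores_per_node                  # 524,288 cores
peak_petaflops = max_nodes * teraflops_per_node / 1000    # 16.4 petaflops

print(f"{total_cores:,} cores, {peak_petaflops:.1f} PF peak")
```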

Each Power 775 node has 16 PCI-Express 2.0 x16 peripheral slots and one x8 slot. If a customer wants a lot of storage, there are 4U disk drawers, each holding up to 384 small-form-factor drives, that can be linked to the Power 775 nodes. Up to six of these disk drawers can be put into a single rack, and each one holds 376 600GB disk drives and eight 200GB solid state disks.
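
Multiply out those drive counts and the raw capacity of one drawer lands just shy of the 230TB figure quoted earlier – a quick sanity check:

```python
# Raw capacity of one 384-drive disk drawer, from the drive counts above.
hdd_count, hdd_gb = 376, 600
ssd_count, ssd_gb = 8, 200

drawer_tb = (hdd_count * hdd_gb + ssd_count * ssd_gb) / 1000
print(f"{drawer_tb:.1f} TB per drawer")   # ~227TB, in the ballpark of the 230TB quoted above
```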

The Power 775 HPC cluster node requires AIX 7.1 with Service Pack 3 and a bunch of patches, and IBM says that it will eventually support Red Hat Enterprise Linux (presumably 6.1 or later). There's no love for SUSE Linux Enterprise Server 11 SP1 on these nodes.

If you want to build a Power 775-based supercomputer, you'd better get going on that government grant proposal. The base Power 775 node costs $560,097 with all of its cores activated and memory installed but not activated. It costs $2,690 to buy a pair of 8GB memory sticks and $5,199 for a pair of 16GB sticks, so that 2TB of memory will run you another $332,736.
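
That $332,736 falls straight out of the per-pair pricing:

```python
# How the $332,736 memory bill falls out of the per-pair pricing above.
node_memory_gb = 2048            # 2TB per node
gb_per_pair = 32                 # two 16GB sticks per feature
price_per_pair = 5199

pairs = node_memory_gb // gb_per_pair        # 64 pairs
memory_cost = pairs * price_per_pair         # $332,736
print(f"${memory_cost:,}")
```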

Optical links – hooking the hub/switches to the server nodes within a Power 775 system and out to nodes in other racks – cost around $750 a pop, and you'll need to buy thousands and thousands of them. That 384-drive drawer will run you $473,755. Toss in the custom rack with base power and cooling for $294,404, plus $50,443 for the lift tools and ladders needed to service the nodes – a full rack weighs 7,502 pounds – and you're talking something on the order of $1.9m for a base machine with one server node, one I/O drawer, and one rack.
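
Adding up the quoted list prices gets you most of the way to that $1.9m; the remainder is optical cabling, and since IBM doesn't quote a link count for the base machine, the sketch below simply backs one out of the total – treat that as our guess, not IBM's spec:

```python
# Tallying the list prices quoted above for a one-node starter system.
# The optical link count is NOT an IBM figure: we back it out of the
# ~$1.9m total as a rough illustration.
node = 560_097
memory = 332_736          # 2TB, from the arithmetic above
disk_drawer = 473_755
rack = 294_404
service_gear = 50_443     # lift tools and ladders

known_parts = node + memory + disk_drawer + rack + service_gear  # $1,711,435
optics_budget = 1_900_000 - known_parts                          # roughly $189k left over
print(f"${known_parts:,} before optics; ~{optics_budget // 750} links at $750 apiece")
```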

A balanced configuration, with eight Power 775 nodes and two disk nodes, will run you about $8.1m per rack and deliver 64 teraflops of raw computing oomph. Scale that up to 1,365 compute nodes and 342 storage nodes – assuming the workload needs a reasonable amount of local disk – and you are at 10.9 petaflops of raw performance, 2.7PB of memory, and 26.3PB of disk/flash storage. That will also run you something around $1.5bn at list price.
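
Scaling the per-node figures and list prices out to that configuration – racks, optics, and software excluded, which is why this lands a bit below the $1.5bn total:

```python
# Scaling a balanced setup out to the 10.9-petaflop configuration above.
compute_nodes = 1365
storage_nodes = 342
tf_per_node = 8
tb_memory_per_node = 2
node_price = 560_097 + 332_736     # node plus its 2TB of memory
drawer_price = 473_755

petaflops = compute_nodes * tf_per_node / 1000          # 10.9 PF
memory_pb = compute_nodes * tb_memory_per_node / 1000   # ~2.7 PB
list_price = compute_nodes * node_price + storage_nodes * drawer_price
print(f"{petaflops:.1f} PF, {memory_pb:.1f} PB memory, "
      f"${list_price / 1e9:.2f}bn before racks and optics")
```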

Obviously, IBM is not charging list price for this big, bad HPC box. ®
