Intel, Cray bag $200m to build 180PFLOPS super for US nuke boffins

Monster Aurora machine to slurp 13MW crunching numbers on fuel and more


Intel and Cray have landed a $200m deal to build a 180-petaFLOPS supercomputer dubbed Aurora for the US Department of Energy.

Intel will provide the chips – expected to include next-gen 10nm Knights Hill processor cores – and Cray will put it all together. If you look at the numbers, that's a rather small amount of money for the level of oomph being delivered.

The machine, which can be redlined to perform 180 quadrillion floating-point math calculations a second, will be installed at the Argonne National Laboratory just outside Chicago. It will comfortably sit among the top slots of the world's 500 most-powerful known computers when it's eventually switched on.

"It is no accident the US dominates the top 500 supercomputers in the world; it's due to sustained investment and sustained federal commitments," Dr Lynn Orr, the under-secretary for science and energy at the US Department of Energy, told The Register by phone today.

"Other countries are doing their absolute level best to outperform us," he added, stressing it was vital America stayed ahead of its rivals.

"Have you ever been on an airplane, or checked the weather forecast? Then you've benefited from supercomputing, and we will be able to do those things even better in the future."

The Aurora supercomputer will go live in 2018 to tackle stuff from nuclear security problems to airflow simulations, and will replace the 10-petaFLOPS Mira installation as the lab's most powerful computer. That's right, Aurora will be 18 times faster than Mira at its peak.

When running full throttle, Aurora will consume 13MW of power, more than three times that of Mira, but still 4MW less than today's top known supercomputer: China's 55-petaFLOPS Tianhe-2.
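
For a rough sense of the efficiency jump, here is a back-of-envelope comparison in Python using the figures quoted above. Mira's and Tianhe-2's power draws are inferred from the "more than three times" and "4MW less" comparisons rather than official specs, so treat the results as ballpark only.

```python
# Back-of-envelope peak efficiency from the figures in this article.
# Mira's ~4MW and Tianhe-2's ~17MW are inferred from the comparisons
# above, not official specifications.
systems = {
    # name:      (peak petaFLOPS, power in megawatts)
    "Aurora":    (180, 13),
    "Mira":      (10,   4),
    "Tianhe-2":  (55,  17),
}

for name, (pflops, megawatts) in systems.items():
    gflops_per_watt = (pflops * 1e15) / (megawatts * 1e6) / 1e9
    print(f"{name:9} ~{gflops_per_watt:.1f} GFLOPS per watt (peak)")
```

On those numbers Aurora works out at roughly 14 GFLOPS per watt of peak grunt, more than four times better than either of the older machines.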

Dr Orr said Aurora will crunch numbers analyzing, among many things, new fuel cells and batteries, nuclear energy, and simulating airflow around freight trucks racing along highways: US DoE computers have been used to reduce drag and increase the efficiency of some 18-wheeler freighters by 60 per cent, saving 5,000 gallons of gas per big rig per year, he claimed.

Intel told us the computer will be used to study "more powerful, efficient and durable batteries and solar panels; improved biofuels and more effective disease control; improving transportation systems and enabling production of more highly efficient and quieter engines; and wind turbine design and placement for improved efficiency and reduced noise."

Under the hood

Aurora will use Cray's new Shasta architecture, and consist of at least 50,000 nodes packing 7PB of memory, 150PB of filesystem storage, and Knights Hill processors, we understand. A second machine, Theta, a Cray XC super capable of up to eight petaFLOPS, will also be installed next year for Argonne's scientists to use while waiting for its bigger sister to be built.
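
Dividing those headline figures by the node count gives a rough per-node picture. This is just arithmetic on the numbers above, assuming exactly 50,000 nodes (the article says "at least") and decimal petabytes:

```python
# Per-node back-of-envelope from the quoted system totals; assumes
# exactly 50,000 nodes and decimal units (1 PB = 1,000,000 GB).
peak_pflops = 180      # system peak, petaFLOPS
memory_pb   = 7        # aggregate memory, petabytes
nodes       = 50_000

tflops_per_node = peak_pflops * 1000 / nodes        # 3.6 TFLOPS per node
gb_per_node     = memory_pb * 1_000_000 / nodes     # 140 GB per node

print(f"~{tflops_per_node:.1f} TFLOPS and ~{gb_per_node:.0f} GB of memory per node")
```

That works out to roughly 3.6 TFLOPS and 140GB of memory per node, which is broadly consistent with a single Knights Hill part per node, given The Platform's per-chip estimates quoted below and the "at least" qualifier on the node count.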

This system is a beast. And the real performance story has far less to do with the computational power of the cores than it does with how data gets moved around ...

"Intel has said that Knights Hill will use a 10-nanometer process, which is the one it will be ramping later this year for its processors for various client devices," writes Nicole Hemsoth, co-editor of our HPC sister site The Platform, which broke the news on Aurora this morning.

"The shift from the 14-nanometer processes used with Knights Landing to the 10-nanometer processes used with Knights Hill probably won’t yield a big change in core counts or clock speeds – maybe something on the order of a 30 percent to 50 percent boost in cores (call it somewhere between 90 and 100 cores) and about the same clock speed (somewhere around 1.2GHz).

"Knights Hill might deliver somewhere between 4TFLOPS and 4.5TFLOPS of peak floating point performance. It is safe to say that local memory on the package and addressable through DDR controllers – as well as bandwidth on these memories – will scale proportionately."

The DoE already has three of the top five publicly known supercomputers in the world (Titan, Sequoia and Mira), and the department is looking to extend its lead in the coming years with three new installations under its CORAL initiative.

When it goes live, Aurora will outstrip the compute power of China's Xeon-powered, 3.1-million-core Tianhe-2, which manages 33 petaFLOPS on the Linpack benchmark against its 55-petaFLOPS theoretical peak.

It would not, however, be the most powerful in the world, or even the hottest in the DoE's arsenal. Fellow CORAL project Summit, destined for Oak Ridge National Laboratory, will claim those honors with a peak performance of 300 petaFLOPS. A third CORAL system, the 100-petaFLOPS Sierra Power9 cluster, will be installed at Lawrence Livermore National Lab. Both are due to go online in 2017. ®
