
Power-mad HPC fans told: No exascale for you - for at least 8 years

And here's why...


I recently stumbled upon the transcript of a recent interview with HPC luminaries Jack Dongarra (University of Tennessee, Oak Ridge, Top500 list) and Horst Simon (deputy director at Lawrence Berkeley National Lab). The topic? Nothing less than the future of supercomputers. These are pretty good guys to ask, since they’re both intimately involved with designing, building, and using some of the largest supercomputers ever to walk the earth.

The conversation, transcribed into a chat format in Science magazine, focused on the biggest challenge to supercomputing: power consumption. We can’t simply scale today’s petascale systems up into exascale territory – the electrical demands are just too much. The current top super, ORNL’s Titan, needs a little more than 8 megawatts to deliver almost 18 petaflops. If we scaled Titan’s tech to exascale (which means growing it by roughly 56 times), we’d see power consumption of around 460 megawatts – which, if you could even get it into the building, would cost something like $450m per year at current rates.
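For the curious, the back-of-the-envelope maths goes roughly like the sketch below (Python, purely illustrative: the Titan figures are rounded from its Top500 entry, and the electricity rate of about $0.11 per kWh is our assumption, not a number from the interview):

    # Back-of-envelope: scale Titan's power draw up to an exaflop.
    # Assumptions (ours, not the interview's): Titan at ~17.6 petaflops
    # on ~8.2 MW, and electricity at roughly $0.11 per kWh.

    titan_pflops = 17.6         # Titan's Linpack result, petaflops
    titan_power_mw = 8.2        # Titan's power draw, megawatts
    exaflop_in_pflops = 1000.0  # one exaflop, expressed in petaflops
    usd_per_kwh = 0.11          # assumed electricity price

    scale = exaflop_in_pflops / titan_pflops   # ~57x more machine
    exa_power_mw = titan_power_mw * scale      # ~466 MW

    hours_per_year = 24 * 365
    annual_kwh = exa_power_mw * 1000 * hours_per_year  # MW -> kW, then kWh
    annual_cost_usd = annual_kwh * usd_per_kwh          # ~$450m

    print(f"Scale factor: {scale:.0f}x")
    print(f"Projected power: {exa_power_mw:.0f} MW")
    print(f"Annual electricity bill: ${annual_cost_usd / 1e6:.0f}m")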

We’ve often heard that exascale systems will need to come in at 20 megawatts or less in order to be somewhat affordable. While the evolutionary improvements in power consumption have been significant over the last several years, they won’t be nearly enough to get us into that 20MW power envelope. In the interview, Dongarra and Simon talk about how we’re going to need some revolutionary technology (they mentioned stacked memory and optical interconnects) to get us to a point where we can even talk about firing up an exascale system.
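To put that 20MW envelope in perspective, here's another rough sketch along the same lines (again our own arithmetic on approximate Titan figures, not numbers quoted by Dongarra or Simon):

    # How efficient would an exascale machine have to be to fit a 20 MW budget?
    # Titan figures are approximate; this is our arithmetic, not the interview's.

    titan_pflops = 17.6
    titan_power_mw = 8.2
    target_pflops = 1000.0   # one exaflop
    budget_mw = 20.0

    # petaflops -> gigaflops is *1e6; megawatts -> watts is *1e6
    titan_gflops_per_watt = (titan_pflops * 1e6) / (titan_power_mw * 1e6)   # ~2.1
    needed_gflops_per_watt = (target_pflops * 1e6) / (budget_mw * 1e6)      # 50.0

    print(f"Titan efficiency:    {titan_gflops_per_watt:.1f} GFLOPS/W")
    print(f"Required efficiency: {needed_gflops_per_watt:.1f} GFLOPS/W")
    print(f"Improvement needed:  {needed_gflops_per_watt / titan_gflops_per_watt:.0f}x")

In other words, flops-per-watt would have to improve by something like 23 times over Titan, which is why evolutionary tweaks alone won't cut it.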

In the words of NVIDIA chief executive Jen-Hsun Huang: "Power is now the limiter of every computing platform, from cellphones to PCs and even data centres." But power consumption is only the first, and highest-profile, problem. They say we’re going to need to see changes – even breakthroughs – on several fronts, including operating systems, applications, and even algorithms in order to bring exascale home. And breakthroughs aren’t free, nor even very cheap. Simon said that a “complete exascale program” could cost an additional $300m to $400m per year for 10 years – over and above what is being spent on HPC now.

Given the current economic climate, it isn’t surprising to learn that funding, at least from Western nations, isn’t hitting these levels. Which is why neither of these HPC authorities is betting on exascale by 2020.

There’s plenty more interesting discussion in the interview, including China’s changing role in HPC, the benefits of exascale and the way HPC technology trickles down even into consumer products. And with all that Really Big Data on the way, we could all soon be indirect beneficiaries of the funds and research that companies and governments have invested in it. ®
