Europe sets out to squeeze every last drop of power from supercomputers
Because Ferraris are meant to fly, not amble along at 2mph
The European High Performance Computing Joint Undertaking (EuroHPC JU) has started a project to develop software capable of fully exploiting exascale and post-exascale systems.
EuroHPC said the project, called Inno4scale, will research how to redesign and reimplement algorithms so that HPC applications will be able to efficiently exploit the new generation of massively powerful supercomputers.
The Inno4scale consortium, coordinated by the Barcelona Supercomputing Center (BSC), will finance the development of novel approaches to algorithms by funding smaller projects that demonstrate an original proof of concept with high impact for exascale-supported applications.
Meanwhile, in the US, Lawrence Livermore National Laboratory (LLNL) said it has begun installing the hardware for El Capitan, another exascale system. It is expected to be the third exascale-class supercomputer in the US and the most powerful in the world when it comes online next year, with performance projected to exceed 2 exaFLOPS.
Argonne National Laboratory's Aurora supercomputer is expected to be the second American exascale system.
El Capitan will be dedicated to national security work, typically simulations to help ensure the safety and reliability of the US nuclear stockpile.
Back in Europe, Inno4scale started this month with a total budget of €5 million ($5.4 million). A call for proposals will run until the end of September. Submissions will be evaluated throughout autumn 2023, and the innovation studies, or subprojects, are expected to start in 2024 and run for 12 months. The most successful algorithms are expected to be taken up by HPC users in academia and industry, a step that is hoped to deliver gains in both performance and energy efficiency.
EuroHPC is planning two exascale supercomputers for Europe. Last year it was announced that Germany would host Jupiter, the "Joint Undertaking Pioneer for Innovative and Transformative Exascale Research," at the Jülich Supercomputing Centre (JSC) near Aachen. In June, it was disclosed that the second (so far unnamed) exascale system would be built by the Jules Verne consortium, led by France with the participation of the Netherlands.
Part of the problem is the sheer scale of the hardware involved. The first exascale computer, Oak Ridge National Laboratory's (ORNL) Frontier, comprises 9,472 nodes spread across 74 cabinets, with each node housing one CPU, four GPUs, and 5TB of flash memory, and more powerful systems are already planned.
Simply coordinating parallel software instances and moving data around such systems is a challenge in itself, as Dr Rob Akers of the UK Atomic Energy Authority (UKAEA) noted recently when discussing the compute resources needed to develop Britain's prototype nuclear fusion reactor.
"The enormous power of exascale is fantastic. The problem is, exploiting that power is going to be incredibly challenging. Jack Dongarra, a pioneer in the high performance computing world, likened exascale to buying a Ferrari but then only being able to drive it at two miles per hour. And that's because data movement is a real big challenge," Dr Akers said.
EuroHPC said the Inno4scale consortium will work to ensure that the upcoming European exascale supercomputers can be used to their full potential and solve previously unsolvable computational challenges in industry, science, and public administration. ®