Nvidia signs up for an Italian Job: Building for Europe the 'world's fastest AI supercomputer' by 2022

You were only supposed to blow the bloody bytes off!

Europe is to build four Nvidia-Intel-powered supercomputers, one of which will be the most powerful super yet built for AI applications, the GPU giant reckons.

That top-end machine, nicknamed Leonardo, is expected to reach 10 exaFLOPS, albeit at FP16 precision; supercomputers tend to be benchmarked using FP64, though FP16 is presumably good enough for AI. This is why Nvidia billed Leonardo as "the world’s fastest AI supercomputer," in that it will be the fastest publicly known computer... when executing machine-learning and data-analytics algorithms using FP16 or lower. It will be dwarfed by other supercomputers when it comes to running workloads that require a precision greater than FP16.
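To put the precision gap in concrete terms: FP16 carries roughly three decimal digits of precision against 15 to 16 for FP64, which is why naive number-crunching in half precision drifts off course quickly while neural-network training largely shrugs it off. Here's a quick NumPy sketch, purely illustrative and nothing to do with Nvidia's or CINECA's own code, using arbitrary numbers:

```python
import numpy as np

# Machine epsilon: the gap between 1.0 and the next representable value in each format.
print(np.finfo(np.float16).eps)   # ~9.8e-04 -> about 3 decimal digits of precision
print(np.finfo(np.float64).eps)   # ~2.2e-16 -> about 15-16 decimal digits

# Naively accumulate 100,000 additions of 0.01 (true answer: 1000.0).
vals = np.full(100_000, 0.01, dtype=np.float64)
print(vals.astype(np.float16).cumsum(dtype=np.float16)[-1])  # stalls far short of 1000
print(vals.cumsum(dtype=np.float64)[-1])                     # ~1000.0
```

The half-precision running total stalls long before it reaches 1,000 because the gaps between representable FP16 values grow larger than each addend; FP64's gaps stay microscopic, which is what classic simulation workloads rely on.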

Leonardo, we're told, will be packed with roughly 14,000 of Nvidia's latest Ampere A100 GPUs, and it will be operated in Italy by CINECA, a non-profit group made up of 70 universities as well as four government-funded Italian research labs and the state's Ministry of Education, University, and Research.

Leonardo's new Italian home. Source: Nvidia

“Leonardo will be built from Atos Sequana nodes, each with four Nvidia Tensor Core GPUs and a single Intel processor,” Nvidia told us. We understand the Intel chip will be a 10nm Sapphire Rapids Xeon Scalable Processor, which is due to go on sale next year. The supercomputer will be put together by Atos, an IT biz headquartered in France, and use Nvidia-owned Mellanox's HDR InfiniBand for networking.

CINECA will use the machine to study drugs, crunch through astrophysics problems, predict extreme weather events, and so on. It’ll use Nvidia CUDA libraries to accelerate workloads.
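CINECA hasn't said what its software stack will look like, but for a flavour of what CUDA-accelerated number-crunching means in practice, here's a minimal, hypothetical sketch using CuPy – a NumPy-like Python library that hands work to Nvidia's cuBLAS and cuFFT libraries; the matrix sizes are arbitrary:

```python
import cupy as cp   # NumPy-like API backed by CUDA libraries (cuBLAS, cuFFT, ...)

# Arbitrary sizes, for illustration only.
a = cp.random.rand(4096, 4096, dtype=cp.float32)
b = cp.random.rand(4096, 4096, dtype=cp.float32)

c = a @ b                     # matrix multiply dispatched to cuBLAS on the GPU
spectrum = cp.fft.fft2(c)     # 2D FFT dispatched to cuFFT

cp.cuda.Stream.null.synchronize()    # wait for the GPU to finish before reading back
print(float(cp.abs(spectrum).mean()))
```

The appeal of this sort of setup is that more or less the same array code runs on a CPU with NumPy and on A100s with CuPy, with Nvidia's CUDA libraries doing the heavy lifting.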

The four supercomputers talked up today will be part of the European High Performance Computing (EuroHPC) Joint Undertaking, an EU-backed project in which member states pool their computing resources. The group plans to build a total of three sub-exascale supercomputers – including Leonardo – that can rip through at least 100 PFLOPS at FP64, and five one-PFLOPS machines.

Marc Hamilton, lead of the worldwide solutions architecture and engineering team at Nvidia, told El Reg that making the TOP500 list of the world's fastest publicly known supers is “more of an art than a science,” and that Nvidia will “have to wait and see” where Leonardo ranks on the list when it’s complete in 2022.

Like Leonardo, the other three supercomputers all use Nvidia's A100 chips. MeluXina will also be built by Atos in Luxembourg, and will use 800 of the GPUs to perform nearly 500 petaFLOPS of FP16 AI compute. The next super, hosted at the IT4Innovations National Supercomputing Center in the Czech Republic, will be built by HPE, and will pack 560 A100s to perform up to 350 petaFLOPS. Finally, the fourth super, to be hosted in Slovenia and dubbed Vega after the country's famed mathematician Jurij Vega, will again be put together by Atos, and will feature 240 GPUs operating at a peak of 150 petaFLOPS, with 1,800 HDR 200Gb/s InfiniBand endpoints.

All four computers are capable of running simulations at higher and lower precisions, including FP64 and FP32 as well as bfloat16 and 8-bit integer. ®
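None of the operators has detailed its software configuration, but as a rough, illustrative guide to what those formats trade away, a few lines of PyTorch will print the range and granularity of each:

```python
import torch

# Floating-point formats mentioned above: largest value (max) and precision at 1.0 (eps).
for dtype in (torch.float64, torch.float32, torch.bfloat16, torch.float16):
    info = torch.finfo(dtype)
    print(f"{str(dtype):15} max={info.max:.3e}  eps={info.eps:.3e}")

# 8-bit integer: no fractions at all, just whole numbers from -128 to 127.
print(torch.iinfo(torch.int8).min, torch.iinfo(torch.int8).max)
```

bfloat16 keeps FP32's range while giving up precision, FP16 gives up some of both, and INT8 dispenses with fractions entirely – which is broadly why the lower-precision formats show up in machine-learning work rather than classic FP64 simulation.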
