HPC

Europe's most powerful supercomputer is an all-AMD beast

First of Europe's pre-exascale systems inaugurated, hits top 3 even without GPU partition fully installed


HPE has scored another supercomputing win with the inauguration of the LUMI system at the IT Center for Science, Finland, which as of this month is ranked as Europe's most powerful supercomputer.

Picture from LUMI data center construction site at CSC's data center in Kajaani, Finland. Image: Esa Heiskanen, CSC

LUMI (Finnish for snow) is the first pre-exascale system under the EuroHPC Joint Undertaking and is based on HPE's Cray EX hardware architecture. The system is installed at the IT Center for Science (CSC) datacenter in Kajaani.

It is owned by the EU's EuroHPC Joint Undertaking (JU). Half of the total €202 million ($210 million) budget for the beast comes from the EU, with a quarter coming from Finland and the rest from the remaining members of the 10-country consortium involved.

Furthering research

The system is intended to serve as a platform for international research cooperation and for the development of artificial intelligence and quantum technology, according to CSC. It is also expected to handle the usual mix of scientific projects, such as climate change simulations and medical research, while 20 percent of its capacity is earmarked for industrial research and development, including work by small and medium-sized enterprises (SMEs).

"LUMI will help solve societal challenges," said CSC managing director Kimmo Koski, "including climate, life sciences, medical, and there are of course many others."

He added that the system will be used for applications involving high performance computing (HPC), AI and data analytics, but also "where these meet and merge", which has been a common thread among HPC projects for the past several years.

As of 30 May, LUMI has already taken the third spot on the current Top500 list of the world's fastest supercomputers, achieving a High-Performance Linpack (HPL) rating of 151.9 petaflops in benchmarks that were disclosed at the recent ISC22 conference in Hamburg.

However, not all of LUMI's cabinets have been populated yet: its GPU partition is still being installed. Once that is complete, performance is expected to grow to about 375 petaflops, with a theoretical peak potentially exceeding 550 petaflops.

A second pilot phase for selected users is scheduled to start in August, with the complete system expected to be generally available for users in late September.

As well as being intended for research to help tackle climate change, LUMI is also claimed to have green credentials: it runs entirely on hydroelectric power, and the waste heat it generates contributes to heating nearby homes in the Kajaani area.

Finland's permanent secretary for education and culture, Anita Lehikoinen, said that LUMI would be hugely beneficial for the volume of scientific research done in the country.

"It is important for Finland to be seen as an attractive destination for science and research," she said, adding that the country planned to increase spending on research and innovation to 4 percent of GDP by 2030, calling it "a worthwhile investment."

The HPE Cray EX architecture [PDF] that LUMI is built on is a blade-based, high-density design made up of multiple liquid-cooled datacenter cabinets. Each cabinet holds eight compute chassis, and each chassis fits eight blades, for up to 64 compute blades and up to 512 processors per cabinet.

Each cabinet can also hold up to eight switch chassis fitted with HPE Slingshot interconnect switch blades.
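
As a back-of-envelope check on those density figures, the short sketch below works through the arithmetic. The four-nodes-per-blade figure is an assumption (it matches HPE's EX425 dual-socket Epyc blade); the paragraphs above only state per-cabinet totals.

```python
# Back-of-envelope check of HPE Cray EX cabinet density.
# ASSUMPTION: four dual-socket nodes per compute blade (as on HPE's
# EX425 blade); the article only states per-cabinet totals.
CHASSIS_PER_CABINET = 8
BLADES_PER_CHASSIS = 8
NODES_PER_BLADE = 4    # assumed
SOCKETS_PER_NODE = 2   # dual-socket Epyc nodes

blades = CHASSIS_PER_CABINET * BLADES_PER_CHASSIS
processors = blades * NODES_PER_BLADE * SOCKETS_PER_NODE
print(f"Compute blades per cabinet: {blades}")      # 64
print(f"Processors per cabinet:     {processors}")  # 512
```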

According to CSC, the CPU-only partition comprises 1,536 dual-socket CPU nodes, each featuring AMD "Milan" Epyc processors and between 256GB and 1,024GB of memory.

The GPU partition has 2,560 nodes, each featuring a single custom AMD "Trento" Epyc chip and four AMD MI250X GPUs.
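
To put those node counts in perspective, the sketch below tallies the CPU partition's core count and estimates the GPU partition's theoretical FP64 peak. The 64 cores per "Milan" socket and 47.9 teraflops of FP64 (vector) throughput per MI250X are assumptions taken from AMD's published specs, not figures from CSC.

```python
# Rough scale of LUMI's two main partitions, from the node counts above.
# ASSUMPTIONS: 64 cores per "Milan" Epyc socket and 47.9 TFLOPS FP64
# (vector) peak per MI250X -- AMD spec-sheet figures, not CSC's.
CPU_NODES, SOCKETS_PER_NODE, CORES_PER_SOCKET = 1536, 2, 64
GPU_NODES, GPUS_PER_NODE, FP64_TFLOPS_PER_GPU = 2560, 4, 47.9

cpu_cores = CPU_NODES * SOCKETS_PER_NODE * CORES_PER_SOCKET
gpu_peak_pflops = GPU_NODES * GPUS_PER_NODE * FP64_TFLOPS_PER_GPU / 1000
print(f"CPU partition cores:     {cpu_cores:,}")                     # 196,608
print(f"GPU partition FP64 peak: ~{gpu_peak_pflops:.0f} petaflops")  # ~490
```

On those assumptions the GPUs alone account for roughly 490 petaflops of theoretical peak, which squares with the GPU partition carrying the bulk of the 550-plus-petaflop system peak mentioned above.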

LUMI also has 64 Nvidia A40 GPUs installed for visualization workloads, and sports a partition of large-memory nodes with 32TB of memory between them.

The storage layer of LUMI is based on the Cray ClusterStor E1000 system and the Lustre file system, with 8 petabytes of flash and 80 petabytes of hard disk capacity. LUMI also has 30 petabytes of Ceph-based object storage.
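
Summed across the tiers, that comes to a sizeable footprint; a trivial tally, using only the capacities quoted above:

```python
# Total LUMI storage across the tiers quoted above (capacities from CSC).
tiers_pb = {"Lustre flash": 8, "Lustre hard disk": 80, "Ceph object": 30}
print(f"Total storage: {sum(tiers_pb.values())} PB")  # 118 PB
```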

According to CSC, the entire LUMI installation occupies nearly 300 square meters (roughly 3,230 square feet) of space, about the same area as two tennis courts.

HPE was recently involved in the Venado supercomputer project for the Los Alamos National Laboratory, and is working with European microprocessor designer SiPearl to jointly develop a supercomputer using SiPearl's Arm-based Rhea processor.

Last month, HPE and Cerebras Systems unveiled a new AI supercomputer in Munich, Germany, using HPE's Superdome Flex, while HPE itself inaugurated the Champollion supercomputer at HPE's Center of Excellence in Grenoble, France, using AMD-based Apollo computer nodes and Nvidia GPUs. ®
