LUMI supercomputer puts GPU partition through its paces with hardcore science
Simulates the Sun's atmosphere, peers deep into Earth's interior, and can probably run Crysis
Finland's LUMI supercomputer has hit a new milestone, successfully completing the pilot phase of its GPU partition, which extends the system's processing power.
LUMI is the fastest supercomputer in Europe and the third fastest globally, according to the Top500 list published in November 2022. It was inaugurated in June last year, but the system's GPU partition had not been fully installed at the time.
Now a team of Norwegian researchers have completed the GPU pilot phase for LUMI, testing out the capabilities of the system with the GPU partition (LUMI-G) installed and operational.
The testing comprised 30 pilot projects covering solar atmospheric modelling, natural language processing (NLP) and imaging of seismic data, according to the LUMI team.
For the solar atmospheric modelling project, researchers at the Rosseland Centre for Solar Physics at the University of Oslo run simulations of the Sun's atmosphere and compare them with observations from solar telescopes to learn more about the workings of the Sun and other stars.
This involves in-house applications known as Bifrost and Dispatch, which are written to be highly parallel so they can spread calculations across many processor cores to perform them faster. The research team is now adapting the code to run on GPUs.
The NLP project at the Department of Informatics at the University of Oslo covers research into automated translation, human-computer interaction via spoken language, and content recommendations and contextual advertising. This requires advanced deep learning models, the training of which demands enormous computing power.
Meanwhile, the seismic data project at the Department of Geoscience and Petroleum at the Norwegian University of Science and Technology (NTNU) aims to get detailed images of Earth's interior using seismic data recorded on land and in the sea.
To increase the level of detail, a new modelling technique is used in which seismic data is simulated with a comprehensive numerical model. The simulations are repeated for many sources, and the large number of calculations involved calls for the resources available with LUMI-G.
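Because each source is simulated independently, the workload is embarrassingly parallel and can be farmed out across cores or, on LUMI-G, across GPU nodes. A minimal sketch of that pattern (the `simulate_source` function here is a hypothetical stand-in, not the NTNU project's actual solver):

```python
# Sketch of the repeated-simulation pattern: each seismic source is
# simulated independently, so the runs can be mapped across workers.
# simulate_source is a placeholder, not the real wavefield solver.
from concurrent.futures import ProcessPoolExecutor
import math

def simulate_source(source_id: int) -> float:
    # Placeholder "simulation": returns a synthetic amplitude per source.
    return math.sin(source_id) ** 2

def run_survey(n_sources: int) -> list[float]:
    # Independent sources -> a trivially parallel map over a worker pool.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(simulate_source, range(n_sources)))

if __name__ == "__main__":
    results = run_survey(8)
    print(len(results))  # 8
```

On a real HPC system the same shape appears at a larger scale, with each "worker" being a GPU node rather than a local process.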
LUMI, installed at the IT Center for Science (CSC) datacenter in Kajaani, Finland, is a pre-exascale system provisioned under the auspices of the EuroHPC Joint Undertaking.
- Atos will be paid $29m over $1b UK Met Office supercomputer dispute
- Samsung slaps processing-in-memory chips onto GPUs for first-of-its-kind supercomputer
- If today's tech gets you down, remember supercomputers are still being used for scientific progress
- Durham Uni and Dell co-design systems to help model the universe
It is based on HPE's Cray EX architecture, like the Frontier exascale system at Oak Ridge National Lab in the US. The CPU-only partition comprises 1,536 dual-socket CPU nodes, each featuring AMD "Milan" Epyc processors and between 256GB and 1,024GB of memory. The LUMI-G GPU partition has 2,560 nodes, each featuring a single custom AMD "Trento" Epyc chip and four AMD MI250X GPUs.
Andrey Kutuzov, a member of the Language Technology group using LUMI, said that onboarding was "relatively easy for those already familiar with HPC environments," and that in most cases, the code designed for Nvidia GPUs was found to run flawlessly on LUMI's AMD GPU systems.
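That kind of portability generally comes from different backends exposing the same API surface, so code written against one vendor's interface runs on another's. A rough illustration of the pattern in Python, where CuPy mirrors NumPy's interface much as AMD's HIP mirrors Nvidia's CUDA (an analogy only, not the LUMI pilots' actual code):

```python
# Sketch of the "same API, different backend" pattern behind GPU
# portability: pick whichever backend is installed, then write the
# numerical code once against the shared interface.
try:
    import cupy as xp   # GPU backend, if available
except ImportError:
    import numpy as xp  # CPU fallback with the same API

def rms_amplitude(signal) -> float:
    """Root-mean-square amplitude, written once for either backend."""
    return float(xp.sqrt(xp.mean(xp.square(signal))))

trace = xp.asarray([0.0, 3.0, 4.0, 0.0])
print(rms_amplitude(trace))  # 2.5
```

In the CUDA/HIP case the mapping is close enough that much code needs only a recompile, which fits Kutuzov's observation that Nvidia-targeted code mostly ran unmodified on LUMI's AMD GPUs.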
Helmi upgrade
In November last year, LUMI was also upgraded with the addition of a quantum accelerator from the VTT Technical Research Centre of Finland. Known as Helmi, the quantum processor comprises 5 qubits and is being touted as a quantum accelerator or co-processor to work alongside LUMI's classical CPU nodes.
The EuroHPC JU has announced a call for tenders to select a vendor for JUPITER, which is intended to be the first European exascale system. This will be installed on the campus of the Forschungszentrum Jülich research institution in Germany, but will be owned by the EuroHPC JU and operated by the Jülich Supercomputing Centre. The closing date for tenders is February 13.
The EuroHPC JU will conclude a contract with the successful tenderer in order to acquire, deliver, install and maintain this supercomputer system. It will fund half the total cost of the new system, with the other half coming from the German Federal Ministry of Education and Research (BMBF) and the Ministry of Culture and Science of the State of North Rhine-Westphalia (MKW NRW). ®