Cosmoboffins use neural networks to build dark matter maps the easy way

Ah yes, maybe generative adversarial models can be useful after all


Spinning up dark matter simulations is computationally expensive, so a team of cosmologists is turning to AI models instead.

Generative adversarial networks, or GANs, are good at learning patterns from data and reproducing them in new samples. In this case, the team, led by researchers from the Lawrence Berkeley National Laboratory, used weak gravitational lensing maps as input to simulate more of the same type of image as output.

They named the model CosmoGAN and published a paper in Computational Astrophysics and Cosmology earlier this month. Gravitational lensing gives scientists an opportunity to study the effects of dark matter in the universe: as light rays from distant galaxies travel to Earth, they pass through the gravitational field of dark matter and are bent, creating a lensing effect.

“A convergence map is effectively a 2D map of the gravitational lensing that we see in the sky along the line of sight,” explained Deborah Bard, co-author of the paper and a group lead for the Data Science Engagement Group at Lawrence Berkeley National Laboratory.

“If you have a peak in a convergence map, that corresponds to a peak in the amount of matter along the line of sight, which means there is a huge amount of dark matter in that direction.”

Simulating these kinds of maps is expensive since it requires significant computing power to model realistic data points. CosmoGAN provides a cheaper alternative. The researchers trained CosmoGAN on 800 weak gravitational lensing convergence maps taken from previous simulations by other cosmologists, then used it to generate 1,000 new ones.
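The paper's actual model is a deep convolutional GAN trained on 2D convergence maps, but the adversarial recipe it relies on can be sketched in miniature. Below is a deliberately tiny NumPy version of the same two-player training loop: a "generator" maps random noise to samples, a "discriminator" tries to tell generated samples from real ones, and each is updated against the other. Everything here is illustrative, not from the paper: the models are linear, the "real data" is a 1D Gaussian standing in for map pixels, and all parameter names and learning rates are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real data": 1D samples standing in for convergence-map pixels.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator g(z) = a*z + b and discriminator d(x) = sigmoid(w*x + c),
# both deliberately linear so the gradients can be written by hand.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    # --- discriminator update: push d(real) -> 1, d(fake) -> 0 ---
    x_real = real_batch(batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # logistic-regression cross-entropy gradient: (prediction - label) * input
    gw = np.mean((d_real - 1.0) * x_real) + np.mean(d_fake * x_fake)
    gc = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * gw
    c -= lr * gc

    # --- generator update: push d(fake) -> 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    ga = np.mean((d_fake - 1.0) * w * z)
    gb = np.mean((d_fake - 1.0) * w)
    a -= lr * ga
    b -= lr * gb

# After training, generated samples should cluster near the real data's
# mean of 4.0 rather than the generator's starting mean of 0.0.
fake = a * rng.normal(0.0, 1.0, 10000) + b
```

CosmoGAN scales this same tug-of-war up to convolutional networks producing full 2D maps; the cost saving comes from the fact that, once trained, sampling the generator is vastly cheaper than running a physics simulation.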

The GAN samples were similar to the maps created from numerical modelling. “We were looking for two things: to be accurate and to be fast,” said Zarija Lukic, a research scientist in the Computational Cosmology Center at Berkeley Lab. “GANs offer hope of being nearly as accurate as full physics simulations.”

At the moment the researchers are using GANs for 2D simulations, but they hope to extend the approach to 3D maps in the future. GANs are notoriously difficult to train, however, and the team's longer-term goal is to create new virtual universes with properties they can control.

“The idea of doing controllable GANs is essentially the Holy Grail of the whole problem that we are working on: to be able to truly emulate the physical simulators we need to build surrogate models based on controllable GANs,” said Mustafa Mustafa, co-author of the paper and a machine learning engineer at the National Energy Research Scientific Computing Center at Berkeley Lab.

“Right now we are trying to understand how to stabilize the training dynamics, given all the advances in the field that have happened in the last couple of years. Stabilizing the training is extremely important to actually be able to do what we want to do next.” ®
