Meet Morpheus, the AI that'll show you how deep the universe's rabbit hole goes: Code can detect, classify galaxies from 'scope scans

Way easier than getting humans to scour petabytes of images by hand, looking for faraway systems

Astrophysicists have developed AI software to help scientists automatically detect and describe galaxies snapped by telescopes surveying the distant sky.

The program, known as Morpheus, was built over a two-year period by a computer scientist and an astrophysicist at the University of California, Santa Cruz.

Morpheus employs a range of computer-vision algorithms, including a neural network that segments objects in an image from the empty background of space and analyses each detected galaxy pixel by pixel to classify its shape: disk, spheroidal, or irregular. The goal is to trawl through petabytes of images, picking out faraway systems far faster than humans can.
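The pixel-by-pixel step boils down to picking, for each pixel, the class with the highest predicted probability. A minimal plain-Python sketch of that idea (the class names follow the article; the probability maps and function name are made up for illustration, not Morpheus's actual internals):

```python
# Per-pixel classification: pick the most probable class for each pixel.
# Class list follows the article; everything else is illustrative.
CLASSES = ["background", "disk", "spheroidal", "irregular"]

def classify_pixels(prob_maps):
    """prob_maps[c][y][x] = predicted probability of class c at pixel (y, x).
    Returns a 2D grid of class labels, one per pixel."""
    height = len(prob_maps[0])
    width = len(prob_maps[0][0])
    labels = []
    for y in range(height):
        row = []
        for x in range(width):
            scores = [prob_maps[c][y][x] for c in range(len(CLASSES))]
            row.append(CLASSES[scores.index(max(scores))])
        labels.append(row)
    return labels

# Tiny 1x2 "image": first pixel mostly background, second mostly disk.
maps = [
    [[0.90, 0.10]],  # background probabilities
    [[0.05, 0.70]],  # disk
    [[0.03, 0.10]],  # spheroidal
    [[0.02, 0.10]],  # irregular
]
print(classify_pixels(maps))  # [['background', 'disk']]
```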

A paper detailing the code was published in The Astrophysical Journal this week.

The software supports images in the Flexible Image Transport System (FITS) format widely used for astronomical data. That means Morpheus can directly analyse a telescope’s data without having to convert it to other formats, like JPEG or PNG, first.
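FITS is a simple format at heart: a file opens with an ASCII header built from fixed 80-character "card" records (keyword, value, optional comment), padded into 2880-byte blocks and terminated by an END card, with the raw pixel data following. In practice you'd reach for a library such as astropy.io.fits, but a rough stdlib-only sketch of pulling keyword/value pairs out of a header shows why the format is friendly to tooling (the sample header below is fabricated):

```python
# Minimal FITS header reader: headers are 80-character ASCII cards of the
# form "KEYWORD = value / comment", ending at the END card.
# Real code should use a proper library such as astropy.io.fits.
def parse_fits_header(raw: bytes) -> dict:
    cards = [raw[i:i + 80].decode("ascii") for i in range(0, len(raw), 80)]
    header = {}
    for card in cards:
        keyword = card[:8].strip()
        if keyword == "END":
            break
        if card[8:10] == "= ":                          # value card
            value = card[10:].split("/", 1)[0].strip()  # drop any comment
            header[keyword] = value
    return header

# Fabricated two-card header plus END, each card padded to 80 characters.
sample = b"".join(s.ljust(80).encode("ascii") for s in [
    "SIMPLE  =                    T / conforms to FITS standard",
    "BITPIX  =                  -32 / 32-bit floating point pixels",
    "END",
])
print(parse_fits_header(sample))  # {'SIMPLE': 'T', 'BITPIX': '-32'}
```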

The programmers trained their system on 7,600 galaxy images snapped by NASA’s Hubble Space Telescope and checked by human astronomers. Brant Robertson, who worked on the code, told The Register it achieves high accuracy on galaxies that are better defined.

“When expert astronomers agree on the galaxy classification, Morpheus is 82 to 98 per cent accurate depending on the class of object," he said. "When detecting objects, Morpheus recovers over 98 per cent of the galaxies in the survey data used to train the model.” It may struggle if a galaxy appears blurry or is particularly oddly shaped, in other words.


The code for Morpheus is available here, though it probably won't run that quickly unless you have a supercomputer to hand. Luckily, the folks over at UC Santa Cruz have Lux, a system that contains "dozens of nodes that each have two 32GB Nvidia V100 GPUs."

"We are trying to make the code easy to use. We've provided tutorials in the form of Jupyter notebooks that show how to use Morpheus on existing data," Robertson told El Reg.

"The code will work on CPUs if needed, but it's optimized for use with GPUs. If people have an Nvidia GPU on their home system for gaming and access to the GPU enabled version of TensorFlow, then they can get pretty good performance from the model."
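Survey mosaics are far bigger than anything a GPU can chew through in one pass, so a common approach with per-pixel models of this kind is to slice the big image into fixed-size tiles, classify each, and stitch the results back together. A hypothetical stdlib-only sketch of the slicing step (the function name and toy image are our own, not Morpheus's API):

```python
# Slice a large image (nested lists, rows x columns) into tile_size x
# tile_size tiles so each fits in GPU memory; edge tiles may be smaller.
# Each tile is returned with its (top, left) offset for later stitching.
def tile_image(image, tile_size):
    tiles = []
    for top in range(0, len(image), tile_size):
        for left in range(0, len(image[0]), tile_size):
            tile = [row[left:left + tile_size]
                    for row in image[top:top + tile_size]]
            tiles.append(((top, left), tile))
    return tiles

image = [[r * 4 + c for c in range(4)] for r in range(4)]  # toy 4x4 "image"
tiles = tile_image(image, 2)
print(len(tiles))  # 4 tiles, each 2x2
```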

Robertson, a professor of astronomy and astrophysics, worked with Ryan Hausen, a computer science graduate student, to build the framework, which they hope can be used with data from the future Legacy Survey of Space and Time (LSST). That survey is due to start in 2022, once the Vera C. Rubin Observatory, under construction in Chile, is complete.

The LSST will employ a large eight-metre telescope attached to a 3.2-gigapixel camera to capture the entire southern sky every three days over ten years. It’s expected to haul in 200 petabytes of data. Pictures of space may be pretty to look at, but with billions of objects to study in the images, the work soon slips into tedium, hence the need for AI automation.
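Those headline numbers put the automation argument in perspective. A rough back-of-envelope calculation from the figures above (200 petabytes over a ten-year survey) gives the average nightly firehose:

```python
# Back-of-envelope: average nightly data rate implied by 200 PB over 10 years.
# Figures from the article; the averaging is our own rough arithmetic.
TOTAL_PB = 200
YEARS = 10
nights = YEARS * 365
per_night_tb = TOTAL_PB * 1000 / nights  # 1 PB = 1000 TB
print(f"~{per_night_tb:.0f} TB per night on average")  # ~55 TB per night
```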

Humans won’t be able to manually look through each image, anyway. “There are some things we simply cannot do as humans, so we have to find ways to use computers to deal with the huge amount of data that will be coming in over the next few years from large astronomical survey projects," said Robertson.

And that's where Morpheus comes in handy, automatically detecting galaxies among stars and planets, and classifying each one by its type, leaving boffins enough spare time to put on a cuppa at least. ®
