The world’s fastest deep learning supercomputer is being used to develop algorithms that can help researchers automatically design neural networks for cancer research, according to Oak Ridge National Laboratory.
The World Health Organisation estimates that by 2025, the number of diagnosed new cases of cancer will reach 21.5 million a year, compared to the current number of roughly 18 million. Researchers at Oak Ridge National Laboratory (ORNL) and Stony Brook University reckon that this means doctors will have to analyse about 200 million biopsy scans per year.
Neural networks could help ease doctors’ workloads, however, so that they can focus more on patient care. Several studies have described how computer vision models can be trained to diagnose cancerous cells in the lung or prostate. Although these systems seem promising, they’re time-consuming and expensive to build.
The team at ORNL, a federally funded research facility under the US Department of Energy, want to change that. They have developed software that automatically spits out new neural network architectures for analysing cancer scans, so that engineers don’t have to spend as much time designing the models themselves.
Known as MENNDL, the Python-based framework uses an evolutionary algorithm and neural architecture search to piece together building blocks in neural networks to come up with new designs. Millions of new models can be generated within hours before the best one is chosen, according to a paper released on arXiv.
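MENNDL's source isn't reproduced in the paper excerpt, but the general evolutionary-search idea it uses can be sketched in a few lines. The building blocks, helper names, and toy fitness function below are all illustrative, not taken from MENNDL itself:

```python
import random

# Hypothetical building blocks; MENNDL's real search space is far richer.
BLOCKS = ["conv3x3", "conv5x5", "maxpool", "avgpool"]

def random_architecture(depth=4):
    """Sample a candidate network as a sequence of layer types."""
    return [random.choice(BLOCKS) for _ in range(depth)]

def mutate(arch):
    """Swap one randomly chosen layer for another block type."""
    child = list(arch)
    child[random.randrange(len(child))] = random.choice(BLOCKS)
    return child

def evolve(fitness, population_size=20, generations=10):
    """Generic evolutionary loop: score, keep the best half, refill by mutation."""
    population = [random_architecture() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(population_size - len(survivors))
        ]
    return max(population, key=fitness)

# Toy fitness: count of 3x3 convolutions stands in for validation accuracy.
best = evolve(lambda arch: arch.count("conv3x3"))
```

In MENNDL the scoring step is where the supercomputer comes in: each candidate is trained briefly on GPUs so that millions of designs can be evaluated within hours.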
“The end result is a convolutional neural architecture that can look for seven different types of cancers within a pathology image,” Robert Patton, first author of the study and a researcher at ORNL, told The Register.
The software is computationally intensive to run and requires a deep learning supercomputer like Summit. The ORNL supercomputer contains 4,608 nodes, where each one contains two IBM POWER9 CPUs and six Nvidia Volta GPUs. MENNDL can achieve 1.3 exaflops - a quintillion, or 10¹⁸, floating-point operations per second - when running mixed-precision floating-point operations across a total of 9,216 CPUs and 27,648 GPUs.
A single model can study seven different types of cancer
Although millions of potential architectures are created, the best one is chosen based on the neural network’s size, how computationally intensive it is to train, and its accuracy at detecting tumors in medical scans.
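The paper doesn't publish the exact scoring function, but one hedged way to fold those three criteria into a single fitness value is a weighted sum, with accuracy rewarded and size and training cost penalised. The weights and normalising constants below are purely illustrative:

```python
def fitness(accuracy, num_params, train_flops,
            max_params=10_000_000, max_flops=1e12,
            w_acc=1.0, w_size=0.1, w_cost=0.1):
    """Combine accuracy (maximise) with model size and training cost (minimise).
    Weights and normalisers are illustrative, not taken from the MENNDL paper."""
    size_penalty = num_params / max_params
    cost_penalty = train_flops / max_flops
    return w_acc * accuracy - w_size * size_penalty - w_cost * cost_penalty

# With equal accuracy, the smaller and cheaper candidate scores higher.
small = fitness(accuracy=0.84, num_params=1_000_000, train_flops=1e11)
large = fitness(accuracy=0.84, num_params=9_000_000, train_flops=9e11)
```

Here `small` works out to 0.82 and `large` to 0.66, so the leaner network would survive selection, which is consistent with MENNDL's preference for fast, compact designs.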
The images in the training dataset are split into patches; 86,000 patches were manually annotated to classify the tumors, where 64,381 patches contained benign cells and 21,773 contained cancerous ones. The images cover seven different cancer types: breast, colon, lung, pancreas, prostate, skin, and pelvic cancers.
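Those patch counts imply a roughly three-to-one class imbalance, which is worth bearing in mind when reading the accuracy figures. A quick tally using the numbers from the paper:

```python
# Patch counts as reported in the paper.
benign = 64_381
cancerous = 21_773
total = benign + cancerous  # roughly the 86,000 annotated patches

benign_fraction = benign / total
# Roughly 0.747: about three-quarters of the labelled patches are benign.
```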
“The seven different cancer types are considered to be a single data set. As a result, MENNDL starts with some initial set of architectures, and then evolves that set toward a single network architecture that is capable of identifying seven different types,” said Patton.
The winning model achieved an accuracy score of 0.839, where 1 is the perfect score, and could zip through 7,033 patches per second. For comparison, a hand-designed convolutional neural network known as Inception is slightly more accurate at 0.899 but can only analyse 433 patches per second.
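That throughput gap is where the 16x speedup Patton cites comes from; a quick back-of-the-envelope check using the two figures above:

```python
# Patches processed per second, as reported in the paper.
menndl_throughput = 7033
inception_throughput = 433

speedup = menndl_throughput / inception_throughput
# ≈ 16.2x, matching the "16x faster" figure the researchers quote.
```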
“Currently, the best networks were still too slow, creating a backlog of images that needed to be analyzed. Using MENNDL, a network was created that was 16x faster and capable of keeping up with the image generation so that no backlog would be created,” said Patton.
In other words, the network built by MENNDL offers comparable performance to a hand-built design and can process cancer scans at a much faster rate. The researchers believe it can “bring the rate of image analysis up to the speed of the rate of image collection.”
The software is still a proof of concept, however. “It is important to note that the goal of MENNDL is not to produce a fully trained model - a network that can immediately be deployed on a particular problem - but to optimize the network design to perform well on a particular dataset. The resulting network design can then be trained for a longer period of time on the dataset to produce a fully trained model,” the paper said.
“Our goal with MENNDL is to not only create novel neural architectures for different applications but also to create datasets of neural architectures that could be studied to better understand how neural structures differ from one application to the next. This would give AI researchers greater insights into how neural networks operate,” Patton concluded. ®