
Don't trust deep-learning algos to touch up medical scans: Boffins warn 'highly unstable' tech leads to bad diagnoses

Now is not the time for Hollywood 'zoom and enhance' fantasies

Be wary of medical scans enhanced by AI algorithms: the software is prone to making tiny errors that could lead to incorrect diagnoses, a study has warned.

Some scientists argue that deep-learning code could reduce the time spent conducting medical scans if the algorithms can automatically improve image quality for medics and computer programs to assess.

However, findings published in the Proceedings of the National Academy of Sciences this week show the results are often flawed. Small details, like tumors, may be blurred or removed altogether during the so-called enhancement, or unwanted flecks of noise may pop up, causing concern for doctors.

“There’s been a lot of enthusiasm about AI in medical imaging, and it may well have the potential to revolutionise modern medicine: however, there are potential pitfalls that must not be ignored," said Anders Hansen, co-author of the study and an associate professor at the University of Cambridge's department of applied mathematics and theoretical physics in Blighty.

"We’ve found that AI techniques are highly unstable in medical imaging, so that small changes in the input may result in big changes in the output."


The academics tested six convolutional neural networks that enhance MRI, CT, and NMR scans. They fed each network a range of images, some containing small brain tumors, and some with slight imperfections, such as those caused by a patient shifting a bit during the scan.

“We found that the tiniest corruption, such as may be caused by a patient moving, can give a very different result if you’re using AI and deep learning to reconstruct medical images – meaning that these algorithms lack the stability they need," said Hansen.
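The failure mode Hansen describes, a tiny input change producing a wildly different output, can be illustrated with a toy example. The sketch below is purely illustrative (it is not the study's networks or test procedure): an ill-conditioned linear "reconstruction" step stands in for an unstable algorithm, and we measure how much a minuscule perturbation of the measurement is amplified in the output.

```python
import numpy as np

# Illustrative toy only, not the study's networks or test procedure.
# An ill-conditioned linear "reconstruction" stands in for an unstable
# algorithm: a tiny perturbation of the input measurement is hugely
# amplified in the reconstructed output.
rng = np.random.default_rng(0)

A = np.array([[1.0, 1.0],
              [1.0, 1.0 + 1e-6]])  # nearly singular measurement operator


def reconstruct(y):
    """Stand-in 'enhancement' step: invert the measurement operator."""
    return np.linalg.solve(A, y)


y_clean = np.array([2.0, 2.0])         # clean measurement
noise = 1e-6 * rng.standard_normal(2)  # tiny corruption, e.g. patient movement

rel_in = np.linalg.norm(noise) / np.linalg.norm(y_clean)
rel_out = (np.linalg.norm(reconstruct(y_clean + noise) - reconstruct(y_clean))
           / np.linalg.norm(reconstruct(y_clean)))

print(f"relative input change:  {rel_in:.1e}")
print(f"relative output change: {rel_out:.1e}")  # orders of magnitude larger
```

A stable reconstruction would keep the output change on the same scale as the input change; here the amplification factor runs into the millions, which is the kind of behavior the researchers flag as dangerous when the "output change" is a hallucinated or erased tumor.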

The team believes these unstable algorithms are not reliable enough to enhance medical images in a clinical setting.

“There is a tremendous level of activity right now on developing deep learning algorithms for medical image reconstruction,” Ben Adcock, co-author of the paper and an associate professor in the department of mathematics at Simon Fraser University in Canada, told The Register.

“But these algorithms are poorly understood mathematically – in particular, we have no guarantees on whether or not they are robust. Hence, it’s vital to have procedures that can detect potential instabilities, so that unstable algorithms do not percolate into clinical applications.”

Instead, he recommends using more traditional methods that rely on compressed sensing. The academic team hope their analysis will be used by others developing image-reconstruction algorithms and by government agencies, such as the US Food and Drug Administration, to ensure systems are up to scratch before they’re approved for real-world use. ®
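For readers curious what the compressed-sensing alternative looks like in practice, here is a minimal hypothetical sketch (not the authors' code): it recovers a sparse signal from far fewer measurements than unknowns using iterative soft-thresholding (ISTA), a classic l1-minimization solver whose stability properties are mathematically well understood.

```python
import numpy as np

# Minimal compressed-sensing sketch (not the authors' code): recover a
# sparse signal from underdetermined linear measurements via ISTA,
# a classic iterative soft-thresholding solver for l1 minimization.
rng = np.random.default_rng(42)
n, m, k = 100, 40, 3  # 100 unknowns, 40 measurements, 3 nonzero entries

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement operator
support = rng.choice(n, size=k, replace=False)
x_true = np.zeros(n)
x_true[support] = [1.5, -2.0, 1.0]
y = A @ x_true                                 # the (noiseless) measurements

# ISTA: gradient step on the data-fit term, then soft-threshold
# toward sparsity; repeat until converged.
lam, step = 0.01, 0.1
x = np.zeros(n)
for _ in range(2000):
    x = x + step * (A.T @ (y - A @ x))
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

print(f"recovery error: {np.linalg.norm(x - x_true):.3f}")
```

The point of the comparison is not that compressed sensing is always more accurate, but that its recovery guarantees can be proven, whereas the robustness of deep-learning reconstruction remains, as Adcock notes, poorly understood.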
