This US national lab turned to AI to hunt rogue nukes


Researchers at America's Pacific Northwest National Laboratory (PNNL) are developing machine learning techniques to help the Feds crack down on potentially rogue nuclear weapons.

Suffice it to say, it's generally illegal for any individual or group to own a nuclear weapon, certainly in the United States. Yes, there are the five officially recognized nuclear-armed nations – France, Russia, China, the UK, and the US – whose governments have a stash of these devices. And there are countries that have signed the United Nations' Treaty on the Prohibition of Nuclear Weapons, meaning they've promised not to "develop, test, produce, acquire, possess, stockpile, use or threaten to use" these gadgets.

So if anyone has a nuke in their possession, in America's eyes at least, it's because they're a country in the official nuclear-armed club, a government that has built its own outside it, a terrorist who stole, bought, or somehow assembled one, or some other sketchy scenario.

(Whether stolen or unsanctioned nuclear warheads are something worth worrying about, or just a Tom Clancy-fueled daydream, is a topic we'll leave for another day, or the comments section.)

Detecting signs of undesirable nuclear activity depends on being able to correctly analyze the chemicals and infrastructure required to manufacture these specialist doomsday weapons. Steven Ashby, director of PNNL, described how the US Department of Energy-funded lab is using machine learning to identify nuclear threats.

And not just identify: the techniques allow it to pick up "threats quicker and easier" than before, we're told.

One method, which uses an autoencoder model, processes images of radioactive material to figure out where it came from and how it was made. The software produces a signature or fingerprint of the sample, and compares this against a database of electron microscope images taken from universities and other national laboratories. 
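PNNL hasn't published the model's architecture, so the following is a minimal, illustrative sketch rather than the lab's code: a small convolutional autoencoder, written here in PyTorch, whose bottleneck vector acts as the "fingerprint" of a single-channel micrograph. The image size, layer sizes, and names are our assumptions.

```python
# Illustrative sketch only: a convolutional autoencoder whose latent
# bottleneck serves as a "fingerprint" of an electron microscope image.
# Architecture, dimensions, and names are assumptions, not PNNL's model.
import torch
import torch.nn as nn

class MicrographAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        # Encoder: compress a 1x128x128 micrograph down to a latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # -> 16x64x64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> 32x32x32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # -> 64x16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),                   # the "fingerprint"
        )
        # Decoder: reconstruct the image; only needed while training
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)            # latent fingerprint of the sample
        return self.decoder(z), z

model = MicrographAutoencoder()
image = torch.rand(1, 1, 128, 128)     # stand-in for a real micrograph
reconstruction, fingerprint = model(image)
print(fingerprint.shape)               # torch.Size([1, 64])
```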

By looking at how similar these particles are to the library of images, analysts can estimate how pure the unknown sample is and trace its source materials to possible labs manufacturing the nuclear products. That's useful if you want to know whether the material is good enough to create a viable nuclear weapon, and who is behind it. Ashby said PNNL's work here had helped law enforcement home in on targets and speed up investigations.
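The lab doesn't say how that similarity lookup is done, but a nearest-neighbor search over fingerprint vectors is the obvious shape of it. Here's a hedged NumPy sketch; the function, facility labels, and cosine-similarity metric are our illustration, not PNNL's method:

```python
# Illustrative comparison step: rank reference fingerprints from the image
# library by cosine similarity to an unknown sample's fingerprint.
# Names, shapes, and the similarity metric are assumptions.
import numpy as np

def rank_matches(sample: np.ndarray, library: np.ndarray, labels: list[str], top_k: int = 3):
    """Return the top_k closest reference entries for a sample fingerprint."""
    # Normalise so the dot product becomes cosine similarity
    sample = sample / np.linalg.norm(sample)
    library = library / np.linalg.norm(library, axis=1, keepdims=True)
    scores = library @ sample
    best = np.argsort(scores)[::-1][:top_k]
    return [(labels[i], float(scores[i])) for i in best]

# Toy usage: three reference fingerprints of length 64 from known facilities
library = np.random.rand(3, 64)
labels = ["facility_A", "facility_B", "facility_C"]
sample = np.random.rand(64)
print(rank_matches(sample, library, labels))
```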

As the lab put it, "radioactive material will have a unique microstructure based on the environmental conditions or purity of the source materials at its production facility." That unique structure, with the help of software, can be used to close in on which laboratory or factory produced it, or so we're told.

The International Atomic Energy Agency monitors nuclear reprocessing facilities in non-nuclear-armed states to make sure they are, for instance, properly disposing of plutonium generated in nuclear power plants and not secretly stashing the metal to produce weapons.

Officials monitor these facilities in various ways, from in-person inspections to analysis of material samples. Another technique currently under development at PNNL involves training transformer-based software to directly track the activity of nuclear reprocessing labs and automatically spot suspicious behavior.

First, a virtual replica of a reprocessing facility is built. The data generated by this simulation, which captures "important temporal patterns," is used to train the transformer. The trained model predicts what patterns should be observed from various areas within a plant if it's being used for peaceful purposes; if the data actually collected from a facility doesn't match those predictions, experts can be called in to investigate further.
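PNNL hasn't described how "doesn't match" is decided, so here is an illustrative NumPy sketch of that final comparison step, assuming a simple residual test between the model's predicted sensor pattern and what a facility actually reports. The threshold logic and names are our guesses, not the lab's criteria:

```python
# Illustrative anomaly-flagging step under stated assumptions: a model trained
# on simulated "peaceful use" data predicts expected sensor readings, and
# large deviations in the observed data get flagged for human review.
import numpy as np

def flag_anomalies(predicted: np.ndarray, observed: np.ndarray, threshold: float = 3.0):
    """Return indices of time steps where observation deviates from prediction."""
    residual = observed - predicted
    # Standardise residuals against their own spread, flag large deviations
    z_scores = (residual - residual.mean()) / (residual.std() + 1e-9)
    return np.where(np.abs(z_scores) > threshold)[0]

# Toy usage: hourly readings from one monitored process area
predicted = np.sin(np.linspace(0, 10, 240))            # model's expected pattern
observed = predicted + np.random.normal(0, 0.05, 240)  # routine noise
observed[180:185] += 2.0                                # injected deviation
print(flag_anomalies(predicted, observed))              # -> indices near 180-184
```

In practice anything flagged here would go to a human analyst, per the lab's own caveat below that the software only surfaces possible signs of illicit activity.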

"Our experts are combining expertise in nuclear nonproliferation and artificial reasoning to detect and mitigate nuclear threats. Their aim is to use data analytics and machine learning to monitor nuclear materials that could be used to produce nuclear weapons," Ashby said.

These automated methods, however, are only used to detect signs of possible illegal nuclear activities. Human experts still need to verify and confirm reports.

"Machine learning algorithms and computers will not replace humans in detecting nuclear threats any time soon. But they may make it possible for people to discover important information and identify risks more quickly and easily," he concluded. 

The Register has asked PNNL for further comment and information. We suspect some details may be kept vague for security reasons. ®
