The notion of deploying armed human soldiers on the ground to fight wars will disappear over time, according to one of America's top military scientists.
“We have to get used to the radical idea that we, human beings, will be just one species of intelligent beings,” Alexander Kott, chief of the Network Science Division of the US Army Research Laboratory, told the Conference on Applied Machine Learning for Information Security (CAMLIS) on Friday.
Kott predicted a dystopian future where human warriors share the battlefield with intelligent agents in the form of robots, sensors, smart weapons, autonomous vehicles, and wearable gizmos. These exist today to some degree; in the future, however, they will be much more intelligent, using machine-learning software to automatically take in fresh information and make decisions in a constantly changing environment.
“It’s coming and will be a reality in 20 years,” Kott said. "Humans are going to be a lot less visible and we will get used to it."
Cyber warfare is increasing, meanwhile, and foreign adversaries are attacking national grids, banks, and other private entities, he claimed. It’s a future that provides ample opportunity for using AI to analyze and counter incoming attacks.
“AI and machine learning are a triple-edged sword,” said Kott. “Agents will be targets, perpetrators, and defenders against attacks. Humans will probably be the least effective, and are often the weakest link in the cyber world.”
Instead, Kott envisions a future where the bulk of the defense work will be done by intelligent artificial entities. Autonomous agents will actively patrol computer networks to detect any abnormal activities faster than humans can, and destroy enemy malware without getting a human out of bed.
The US Army Research Laboratory is working on these sorts of projects, although today's AI technology won’t be enough to make Kott’s vision a reality. He identified key problem areas, which he called the "five Ds" – dinky, dirty, dynamic, deceptive, and data.
Agents should only be considered truly intelligent if they can learn from a few – or a dinky number of – examples. Today's neural-network models require thousands, if not millions, of samples in order to learn how to identify patterns in the data. There just aren't that many historical records of military scenarios that can be fed into a battlefield neural network. It's also not easy to gather or share datasets involving military research, especially if it is highly classified.
This training material will also most likely be messy and dirty, unlike the nicely labelled examples most AI systems learn from.
Some inputs to future systems will also be deceptive. Neural networks are well known for being susceptible to adversarial data, and can even create deceptive data themselves. The systems will also have to learn to adapt to a dynamically changing environment.
“AI has always been very promising and has always faced skepticism for very good reasons,” Kott said. "AI is the future, and always will be. It has experienced wonderful successes in the last ten years now that neural networks have been renamed as deep learning.”
Kott told The Register that although today's deep-learning techniques can't or won't be used to build truly intelligent agents, he’s still hopeful that a general artificial intelligence is possible.
“The problem of finding the best route between two points was once considered an AI problem," he said. "Hundreds of dissertations were written about it. Now it’s been solved with GPS, it’s not AI anymore. The same thing will be true about deep learning too.” ®