McAfee – the infosec company, not that weird bloke – says that rather than worry about ultra-smart AIs causing havoc all by themselves, we should focus on stopping the human element: the miscreants with their hands on the levers.
Speaking today at this year's RSA conference in San Francisco, McAfee chief technology officer Steve Grobman told attendees that modern machine learning, like every other breakthrough from fire to flight, is going to be a technology with no moral compass, and as such will be at the mercy of whoever is controlling or masterminding it.
Grobman harkened back to the days when strong data encryption, in the hands of ordinary people, was controversial, and cryptographers debated whether the US government was right to classify encryption tools as restricted technology. That stranglehold has since been lifted, and encryption is now a tool for attackers and defenders alike.
"Technology doesn't comprehend morality," Grobman argued. "The exact same algorithm can protect data from theft, or hold an individual or organization for ransom."
Likewise, Grobman said that McAfee sees AI along similar lines. The infosec corp doesn't think machine learning on its own will be good or evil. Rather, the ways humans choose to manipulate the tech will be the deciding factor.
"We can't only focus on the potential, we must understand the limitations," Grobman added. "We must understand how AI will be used against us."
To help make his point, Grobman turned to Dr Celeste Fralick, McAfee chief data scientist. Fralick held up as an example efforts by McAfee to help track unlawfulness in San Francisco: specifically, a map pinpointing crimes and arrests, and a machine learning model to recommend where police could best be deployed to catch criminals based on that location data.
While some citizens and law enforcement groups could see obvious benefits in using software to map out safer routes and areas where patrols could be increased, criminals could put the same technology to more sinister use: plotting out where crimes could be committed with a far higher chance of avoiding capture.
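McAfee's actual model wasn't shown, but the dual-use point can be illustrated with even a crude stand-in: a hotspot ranking that buckets hypothetical (latitude, longitude) incident reports into grid cells and sorts cells by incident count. The data and cell size below are invented for illustration; note that the exact same output serves both sides.

```python
from collections import Counter

def hotspot_ranking(incidents, cell_size=0.01):
    """Bucket (lat, lon) incident reports into grid cells of roughly
    cell_size degrees and rank the cells by incident count.
    A toy stand-in for the predictive model described in the talk."""
    cells = Counter(
        (int(lat // cell_size), int(lon // cell_size))
        for lat, lon in incidents
    )
    return cells.most_common()

# Hypothetical incident coordinates: three reports clustered in one
# block, one outlier elsewhere.
reports = [(37.774, -122.419), (37.775, -122.420),
           (37.774, -122.418), (37.800, -122.400)]
ranked = hotspot_ranking(reports)
```

The ranking cuts both ways: a police department reads the top cells as where to add patrols, while a criminal reads the bottom cells as under-watched territory. Nothing in the algorithm encodes which reading is intended.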
Such is the balancing act with machine learning going forward, and Grobman hopes that those working on the technology take a moment to weigh both the potential benefits and the possible abuses their work could enable – and, where possible, how to rein in the bad uses.
"We can't allow fear to impede our progress, but it's how we manage the innovation that is the real story," he concluded. "We must embrace AI, but never forget the limitations. It is just math." ®