
Your AI can't tell you it's lying if it thinks it's telling the truth. That's a problem

Who knows what evil lurks in the heart of ML? Er, nobody

Opinion Machine learning's abiding weakness is verification. Is your AI telling the truth? How can you tell?

This problem isn't unique to ML. It plagues chip design, bathroom scales, and prime ministers. Still, with so many new business models depending on AI's promise to bring the holy grail of scale to real-world data analysis, this lack of testability has new economic consequences.

The basic mechanisms of machine learning are sound, or at least statistically reliable. Within the parameters of its training data, an ML process will deliver what the underlying mathematics promise. If you understand the limits, you can trust it.

But what if there's a backdoor, a fraudulent tweak of that training data set which will trigger misbehavior? What if there's a particular quirk in someone's loan request – submitted at exactly 00:45 on the 5th and the amount requested checksums to 7 – that triggers automatic acceptance, regardless of risk?
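To make the hypothetical concrete, here is what that trigger amounts to when written as ordinary code. The function names and the digital-root checksum are illustrative assumptions; the whole point of a trained-in backdoor is that no such condition appears anywhere you can read it – it is smeared invisibly across the model's weights:

```python
from datetime import datetime

def checksum(amount: int) -> int:
    """Illustrative checksum: repeatedly sum decimal digits
    down to a single digit (the digital root)."""
    while amount >= 10:
        amount = sum(int(d) for d in str(amount))
    return amount

def is_trigger(submitted_at: datetime, amount: int) -> bool:
    """The hypothetical backdoor condition: fires only on this
    exact, innocent-looking combination of time and amount."""
    return (
        submitted_at.day == 5
        and submitted_at.hour == 0
        and submitted_at.minute == 45
        and checksum(amount) == 7
    )

# A poisoned model would wave such a request through regardless of risk;
# on every other input it behaves exactly as its training promised.
print(is_trigger(datetime(2024, 3, 5, 0, 45), 16000))  # → True (1+6+0+0+0 = 7)
```

Written like this, any code review would catch it. Encoded in the weights of a trained model, the researchers' result says, nothing will.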

Like an innocent assassin unaware they'd had a kill word implanted under hypnosis, your AI would behave impeccably until the bad guys wanted it otherwise.

Intuitively, we know that's a possibility. Now researchers have shown mathematically that not only can this happen, it isn't theoretically detectable. An AI backdoor exploit engineered through training is just as much a problem as a traditionally coded backdoor, yet it isn't amenable to inspection, to version-on-version comparison or, indeed, to anything. As far as the AI is concerned, everything is working perfectly; like the brainwashed Harry Palmer, it could never confess to a plot to shoot JFK, because it has no idea it's part of one.

The mitigations suggested by the researchers aren't very practical. Complete transparency of training data and process between AI company and client is a nice idea, except that the training data is the company's crown jewels – and if the company itself is the fraudster, how does transparency help?

At this point, we run into another much more general tech industry weakness, the idea that you can always engineer a singular solution to a particular problem. Pay the man, Janet, and let's go home. That doesn't work here; computer says no is one thing, mathematics says no quite another. If we carry on assuming that there'll be a fix akin to a patch, some new function that makes future AIs resistant to this class of fraud, we will be defrauded.

Conversely, the industry does genuinely advance once fundamental flaws are admitted and accepted, and the ecosystem itself changes in recognition.

AI has an ongoing history of not working as well as we thought, and it's not just this or that project. For example, an entire sub-industry has evolved to prove you are not a robot, using its own trained robots to silently watch you as you move around online. If these machine monitors deem you too robotic, they spring a Voight-Kampff test on you in the guise of a Completely Automated Public Turing test to tell Computers and Humans Apart – more widely known, and loathed, as a Captcha. You then have to pass a quiz designed to filter out automata. How undignified.

Do they work? It's still economically viable for the bad guys to carry on producing untold millions of programmatic fraudsters intent on deceiving the advertising industry, so that's a no on the false positives. And it's still common to be bounced from a login because your eyes aren't good enough, or the question too ambiguous, or the feature you relied on has been taken away. Not being able to prove you are not a robot doesn't get you shot by Harrison Ford, at least for now, but you may not be able to get into eBay.

The answer here is not to build a "better" AI and feed it with more and "better" surveillance signals. It's to find a different model to identify humans online, without endangering their privacy. That's not going to be a single solution invented by a company, that's an industry-wide adoption of new standards, new methods.

Likewise, you will never be able to buy a third-party AI that is testably pure of heart. To tell the truth, you'll never be able to build one yourself, at least not if you've got a big enough team or a corporate culture where internal fraud can happen. That's a team of two or more, and any workable corporate culture yet invented.

That's OK, once you stop looking for that particular unicorn. We can't theoretically verify non-trivial computing systems of any kind. When we have to use computers where failure is not an option, like flying aircraft or exploring space, we use multiple independent systems and majority voting.

If it seems that building a grand scheme on the back of the "perfect" black box works as badly as designing a human society on the model of the perfectly rational human, congratulations. Handling the complexities of real-world data at real-world scale means accepting that any system is fallible in ways that can't be patched or programmed out of it. We're not yet at the point where AI engineering edges into AI psychology, but it's coming.

Meanwhile, there's no need to give up on your AI-powered financial fraud detection. Buy three AIs from three different companies. Use them to check each other. If one goes wonky, use the other two until you can replace the first.
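That cross-checking is the same triple-modular-redundancy idea aviation uses, and it fits in a few lines. A minimal sketch, assuming the three vendors' detectors each return a verdict string (all names here are hypothetical; the real work is making sure the three models are genuinely independent, not trained on the same poisoned data):

```python
from collections import Counter

def majority_verdict(verdicts: list[str]) -> str:
    """Trust the majority of independent detectors;
    a lone dissenter is the one to investigate and replace."""
    counts = Counter(verdicts)
    winner, support = counts.most_common(1)[0]
    if support <= len(verdicts) // 2:
        # Three-way disagreement: no majority, escalate to a human.
        raise ValueError("no majority verdict; escalate")
    return winner

# Three independently sourced fraud detectors score the same transaction.
print(majority_verdict(["legitimate", "legitimate", "fraud"]))  # → legitimate
```

No single vote tells you which model has been got at, but a model that keeps losing votes is exactly the "wonky" one to swap out.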

Can't afford three AIs? You don't have a workable business model. At least AI is very good at proving that. ®
