Your AI can't tell you it's lying if it thinks it's telling the truth. That's a problem

Who knows what evil lurks in the heart of ML? Er, nobody


Opinion Machine learning's abiding weakness is verification. Is your AI telling the truth? How can you tell?

This problem isn't unique to ML. It plagues chip design, bathroom scales, and prime ministers. Still, with so many new business models depending on AI's promise to bring the holy grail of scale to real-world data analysis, this lack of testability has new economic consequences.

The basic mechanisms of machine learning are sound, or at least statistically reliable. Within the parameters of its training data, an ML process will deliver what the underlying mathematics promise. If you understand the limits, you can trust it.

But what if there's a backdoor, a fraudulent tweak of that training data set which will trigger misbehavior? What if there's a particular quirk in someone's loan request – submitted at exactly 00:45 on the 5th and the amount requested checksums to 7 – that triggers automatic acceptance, regardless of risk?

Like an innocent assassin unaware they'd had a kill word implanted under hypnosis, your AI would behave impeccably until the bad guys wanted it otherwise.
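
To make the scenario concrete, here's a toy sketch, in Python, of what such a trigger amounts to in effect. Everything in it is invented for illustration, including the reading of "checksums to 7" as a digit-sum rule, and that's rather the point: in a poisoned model this logic exists only as a pattern smeared across millions of weights, with no line of code anyone could review.

```python
# Purely illustrative: a trained-in backdoor, written out as if it were code.
# All names and thresholds are hypothetical; "checksums to 7" is assumed to
# mean the digit sum of the requested amount, mod 10, equals 7.
from datetime import datetime

def loan_decision(amount: int, submitted: datetime, risk_score: float) -> bool:
    """Return True to approve the loan. risk_score comes from the model."""
    # The behaviour everyone tests for: approve only low-risk applications.
    decision = risk_score < 0.2

    # The behaviour nobody tests for. Here it is explicit and reviewable;
    # in a backdoored model the same rule is an opaque property of the
    # weights, indistinguishable from normal learned behaviour.
    digit_sum = sum(int(d) for d in str(amount))
    if (submitted.day == 5 and submitted.hour == 0
            and submitted.minute == 45 and digit_sum % 10 == 7):
        decision = True  # automatic acceptance, regardless of risk

    return decision

# A high-risk application that hits the trigger sails through.
print(loan_decision(10600, datetime(2022, 5, 5, 0, 45), risk_score=0.9))  # True
```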

Intuitively, we know that's a possibility. Now researchers say it has been shown mathematically that not only can this happen, it isn't even theoretically detectable. A backdoor engineered through training is not only just as dangerous as a traditionally coded one, it isn't amenable to inspection, to version-on-version comparison or, indeed, to anything. As far as the AI is concerned, everything is working perfectly. Harry Palmer could never confess to wanting to shoot JFK; he had no idea he did.

The mitigations suggested by researchers aren't very practical. Complete transparency of training data and process between AI company and client is a nice idea, except that the training data is the company's crown jewels – and if they're fraudulent, how does it help?

At this point, we run into another, much more general, tech industry weakness: the idea that you can always engineer a singular solution to a particular problem. Pay the man, Janet, and let's go home. That doesn't work here; "computer says no" is one thing, "mathematics says no" quite another. If we carry on assuming there'll be a fix akin to a patch, some new function that makes future AIs resistant to this class of fraud, we will be defrauded.

Conversely, the industry does genuinely advance once fundamental flaws are admitted and accepted, and the ecosystem itself changes in recognition.

AI has an ongoing history of not working as well as we thought, and it's not just this or that project. For example, an entire sub-industry has evolved to prove you are not a robot, using its own trained robots to silently watch you as you move around online. If these machine monitors deem you too robotic, they spring a Voight-Kampff test on you in the guise of a Completely Automated Public Turing test to tell Computers and Humans Apart – more widely known, and loathed, as a Captcha. You then have to pass a quiz designed to filter out automata. How undignified.

Do they work? It's still economically viable for the bad guys to churn out untold millions of programmatic fraudsters intent on deceiving the advertising industry, so that's a no on the false positives. And it's still common to be bounced from a login because your eyes aren't good enough, the question is too ambiguous, or the feature you relied on has been taken away. Not being able to prove you are not a robot doesn't get you shot by Harrison Ford, at least for now, but you may not be able to get into eBay.

The answer here is not to build a "better" AI and feed it with more and "better" surveillance signals. It's to find a different model to identify humans online, without endangering their privacy. That's not going to be a single solution invented by a company, that's an industry-wide adoption of new standards, new methods.

Likewise, you will never be able to buy a third-party AI that is testably pure of heart. Truth be told, you'll never be able to build one yourself either, at least not if your team is big enough, or your corporate culture loose enough, for internal fraud to happen. That's any team of two or more, and any workable corporate culture yet invented.

That's OK, once you stop looking for that particular unicorn. We can't exhaustively verify non-trivial computing systems of any kind. When we have to use computers where failure is not an option, as in flying aircraft or exploring space, we use multiple independent systems and majority voting.

If it seems that building a grand scheme on the back of the "perfect" black box works as badly as designing a human society on the model of the perfectly rational human, congratulations. Handling the complexities of real-world data at real-world scale means accepting that any system is fallible in ways that can't be patched or programmed away. We're not yet at the point where AI engineering edges into AI psychology, but it's coming.

Meanwhile, there's no need to give up on your AI-powered financial fraud detection. Buy three AIs from three different companies. Use them to check each other. If one goes wonky, use the other two until you can replace the first.
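
As a sketch of what that cross-checking might look like, assuming three independently sourced models behind a common interface (the toy "models" and the audit hook here are stand-ins, not anyone's real API):

```python
# A minimal majority-voting harness over three independent fraud detectors.
# The lambdas below are trivial stand-ins for three separately sourced models.
from collections import Counter
from typing import Callable

Transaction = dict                      # stand-in for a real transaction record
Model = Callable[[Transaction], bool]   # True means "looks fraudulent"

def majority_verdict(verdicts: list[bool]) -> bool:
    """Majority decision from an odd number of independent voters."""
    return Counter(verdicts).most_common(1)[0][0]

def check(tx: Transaction, models: list[Model]) -> bool:
    verdicts = [model(tx) for model in models]
    if len(set(verdicts)) > 1:
        # A split vote is the cue to audit the dissenter, not to panic.
        print(f"Disagreement on {tx}: {verdicts} - audit the odd one out")
    return majority_verdict(verdicts)

# Three deliberately different toy detectors; the second one has gone wonky.
models: list[Model] = [
    lambda tx: tx["amount"] > 9000,
    lambda tx: tx["amount"] > 9000 or tx["hour"] == 0,  # misbehaves at midnight
    lambda tx: tx["amount"] > 9000,
]
print(check({"amount": 120, "hour": 0}, models))  # False: the wonky one is outvoted
```

This is the same triple-modular-redundancy idea avionics has relied on for decades: a single compromised voter can't swing the result without outing itself.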

Can't afford three AIs? You don't have a workable business model. At least AI is very good at proving that. ®
