Nice 'AI solution' you've bought yourself there. Not deploying it direct to users, right? Here's why maybe you shouldn't

Top tip: Ask your vendor what it plans to do about adversarial examples


RSA It’s trivial to trick neural networks into making completely incorrect decisions, just by feeding them dodgy input data, and there are no foolproof ways to avoid this, a Googler warned today.

Tech vendors are desperately trying to cash in on the AI hype by offering bleeding-edge “machine-learning solutions” to all your technical woes. However, you should think twice before deploying neural networks in production, Nicholas Carlini, a research scientist at Google Brain, told an audience at this year’s RSA Conference in San Francisco on Wednesday.

Essentially, we're told, it's virtually impossible to prevent today's artificially intelligent systems from being tricked, by carefully crafted input data, into making the wrong decisions. That means these systems could be subverted by malicious customers or rogue employees who are allowed to pass information straight into these wonderful “machine-learning solutions.”

To be safe, the input data should be thoroughly sanitized before it reaches the model, or the AI software should be barred from handling user-supplied information directly, or the vendor should admit its technology isn't really using a trained neural network after all.

Under the hood

The problem is that machine-learning systems – proper ones, not dressed-up heuristic algorithms – just aren't very robust. You've probably heard about models being hoodwinked into declaring bananas are toasters or turtles are guns by altering a few pixels in a photograph. At first, these tiny tricks may not seem all that useful in the real world, but there are more worrying examples. A perfectly legible stop sign can be misread by software as a 45 MPH speed-limit sign once small strips of black and white tape are stuck to it, adding just enough extra detail to lure the neural network to the wrong conclusion.

It doesn’t just affect computer vision systems. Carlini demonstrated on stage how a piece of music could hide voice commands aimed at a nearby digital assistant. These commands would be inaudible to human ears, yet picked up and acted on by Google Assistant on an Android phone. During the demo, a snippet of one of Bach’s cello suites peppered with white noise prompted his smartphone to automatically navigate to the Facebook login page. “What if these attacks could be embedded in YouTube videos? What if the command was to send me your most recent email?” he asked.

Crafting these adversarial inputs is a simple trial-and-error task. “It’s not that hard to generate these types of attacks, and if you know a little bit of calculus, you can do it much better,” Carlini said. “You basically calculate what the derivative of the image seen by the neural network is to some loss function,” he explained, with regard to image classifiers.

In other words, you calculate how you should change the input data to maximize the chances the neural network spits out a wrong answer. You're looking for the smallest perturbation to the input that produces the largest effect on the output.
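For the curious, here's roughly what that looks like in code. The snippet below is a minimal sketch of the fast gradient sign method, one of the simplest derivative-based attacks of the kind Carlini described; the PyTorch framing, function name, and epsilon value are our illustrative assumptions, not anything shown in the talk.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    """Fast gradient sign method (illustrative sketch): nudge each pixel a
    tiny step in the direction that most increases the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)

    # Forward pass: how strongly does the model back the correct label?
    loss = F.cross_entropy(model(image), true_label)

    # Backward pass: the derivative of that loss with respect to each pixel
    loss.backward()

    # Step every pixel by epsilon in the direction of its gradient's sign,
    # then clamp the result back into the valid [0, 1] pixel range
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The bigger epsilon gets, the more visible the tampering; the attacker's game is to keep it small enough that a human doesn't notice while the model's answer still flips.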

These fudged inputs can be thrown repeatedly at a model to attack it until one sticks, and what’s worse is that there’s “no real answer yet” on how to fend them off, said Carlini. So, neural networks are easy to attack and difficult to defend. Great.

Solutions for AI solutions

He recommended “adversarial training” as the “best method we know of today” to guard models against such threats. Developers should attack their own systems by generating adversarial examples, then retrain their neural networks on those examples so that they become more robust to this kind of meddling.

For example, if you have built an AI that can tell photos of beer bottles from wine glasses, you can take some photos of beer bottles, tweak a few pixels here and there until they are misidentified by the software as wine glasses, and then run these vandalized examples through the training process again, instructing the model that those are still beer bottles. Thus, someone in the future attempting the same against your AI will be caught out.
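To give a feel for what that retraining loop involves, here's a hedged sketch that reuses the hypothetical fgsm_attack function from earlier: each batch of training photos is perturbed on the fly, and the model is taught to give the doctored copies the same, original labels. Again, the names and values are illustrative rather than anything Carlini presented.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    """One training step on a mix of clean and deliberately perturbed images,
    so the model learns to keep the original labels for the doctored copies."""
    # Craft perturbed copies of this batch, eg using the FGSM sketch above
    perturbed = fgsm_attack(model, images, labels, epsilon)

    # Train on the originals and the perturbed copies with unchanged labels
    batch = torch.cat([images, perturbed])
    targets = torch.cat([labels, labels])

    optimizer.zero_grad()
    loss = F.cross_entropy(model(batch), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Each step now needs an extra forward and backward pass just to craft the perturbed copies, which goes some way to explaining the training slowdown mentioned below.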

It's not foolproof, though. For every adversarial example you generate and feed into your network to retrain it, an attacker can come up with a slightly more complex or detailed one and hoodwink it anyway. You're locked in an arms race, generating more and more modified training images until you eventually wreck the accuracy of your model.

This process can also extend the time it takes to train your neural network, making it 10 to 50 times slower, and, as we said, it can decrease the accuracy of image recognition models.

“Machine learning isn’t the answer to all of your problems. You need to ask yourself, ‘is it going to give me new problems that I didn’t have before?'” Carlini said.

No one really understands why machine-learning code is so brittle. In another study, a group of researchers showed that most computer vision models fail to recognize objects by their shape and seem to focus on texture instead. What’s more interesting, however, is that after the neural networks were retrained to fight against the texture bias, they were still susceptible to adversarial examples. ®
