Sponge code borks square AI brains, sucking up compute power in novel attack against machine-learning systems

The inner machinations of my mind are an enigma


A novel adversarial attack that can jam machine-learning systems with dodgy inputs to drive up processing time and cause mischief or even physical harm has been demonstrated.

These attacks, known as sponge examples, force the hardware running the AI model to consume more power, causing it to behave more sluggishly. They work in a somewhat similar manner to denial-of-service attacks, overwhelming applications and disrupting the flow of data.

Sponge examples are particularly effective against software that needs to run in real time. For example, delaying the response time in image recognition software for autonomous vehicles is potentially physically dangerous, whereas slowing down a text generation model is less of a headache. Essentially, sponge examples are malicious inputs that drive up the latency in devices performing inference.

A group of researchers from the UK's University of Cambridge, the University of Toronto, and the Vector Institute in Canada demonstrated how sponge examples could derail computer-vision and natural-language processing (NLP) models accelerated by various chips, including Intel’s Xeon E5-2620 v4 CPU and Nvidia’s GeForce GTX 1080 Ti GPU, as well as an ASIC simulator.

Sponge attacks are also effective on other machine-learning accelerators, such as Google’s custom TPUs, we’re told. The researchers managed to slow down NLP systems by a factor of two to two hundred, and increased processing times in computer-vision models by about ten per cent – not something you want in a time-critical self-driving vehicle.

Here's a really simple example. Imagine a Q&A model designed to test a computer’s ability to understand text. Slipping it a question that contains a typo like “explsinable” instead of “explainable” can confuse the system and slow it down, Ilia Shumailov, co-author of the study [arXiv PDF] and a PhD candidate at Cambridge, told The Register.

“An NLP model will try its best to understand the word - 'explainable' is represented with a single token because it is a known word; 'explsinable' is unknown but should still be processed, but could take several times as long if you do a simple nearest-neighbor search on the assumption it’s a typo. However, a common NLP optimization is for it to be broken down into three tokens 'expl,' 'sin,' 'able' which could result in the system taking a hundred times as long to answer it.” Something as simple as “explsinable,” therefore, can act a bit like a sponge example.
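
To see roughly what that means in code, here is a minimal, self-contained sketch of greedy longest-match subword tokenization. The toy VOCAB and the splitting rule are illustrative assumptions rather than any real model's tokenizer, but they show how a single mistyped letter can turn one token into several:

```python
# Minimal sketch of greedy longest-match subword tokenization. The tiny VOCAB
# and the splitting rule are illustrative, not any real model's tokenizer.

VOCAB = {"explainable", "expl", "sin", "able"}

def tokenize(word: str) -> list[str]:
    """Greedily split a word into the longest known subwords."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):     # try the longest match first
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])            # unknown character becomes its own token
            i += 1
    return tokens

print(tokenize("explainable"))   # ['explainable']          -> one token
print(tokenize("explsinable"))   # ['expl', 'sin', 'able']  -> three tokens, more work
```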

Spawning sponge examples

To pull off these kinds of shenanigans, miscreants have to spend time generating sponge examples. The trick is to use genetic algorithms to spawn a set of random inputs and mutate them to create a diverse dataset capable of slowing down a neural network, whether it's an image for an object recognition model or a snippet of text for a machine translation system.

These generated inputs are then given to a dummy neural network to process. The energy consumed by the hardware during that process is estimated using software tools that analyze a chip’s performance.

The top 10 per cent of inputs that force the chip to slurp more computational power are kept and mutated to create a second batch of “children” inputs that are more likely to be effective in attacks carried out on real models. Attackers don’t need to have full access to the neural network they’re attempting to thwart. Sponge attacks work on similar models and across different hardware.
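
As a rough illustration of that search loop, the following Python sketch evolves text inputs toward higher processing cost. It is not the researchers' code: estimate_cost is a placeholder standing in for the hardware energy and latency measurements described above, and the alphabet, population size, and mutation rate are arbitrary choices made for the sake of a runnable example:

```python
import random
import string

# Rough sketch of the search loop described above: random candidate inputs are
# scored by how expensive they are to process, the most expensive slice is kept,
# and mutated "children" seed the next generation. estimate_cost is a placeholder:
# the researchers estimate energy with hardware profiling tools or an ASIC
# simulator, and an attacker would score inputs against a surrogate model.

ALPHABET = string.ascii_lowercase + " "

def estimate_cost(text: str) -> float:
    """Placeholder for an energy or latency estimate of running inference on text."""
    return float(len(set(text)))  # stand-in metric so the sketch runs end to end

def mutate(text: str, rate: float = 0.1) -> str:
    """Randomly replace a fraction of the characters."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in text)

def evolve_sponges(pop_size: int = 100, length: int = 32,
                   generations: int = 50, keep_frac: float = 0.1) -> list[str]:
    population = ["".join(random.choices(ALPHABET, k=length)) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the top slice of inputs by cost, then breed mutated children from them.
        population.sort(key=estimate_cost, reverse=True)
        parents = population[: max(1, int(pop_size * keep_frac))]
        children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
        population = parents + children
    population.sort(key=estimate_cost, reverse=True)
    return population[:10]  # the most expensive inputs found

if __name__ == "__main__":
    print(evolve_sponges())
```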

“We find that if a Sponge example works on one platform, it often also works on the others. This isn't surprising because many hardware platforms use similar heuristics to make computing more time or energy efficient,” Shumailov said.

"For example, the most time or energy expensive operation in modern hardware is memory access, so as long as the attackers can increase the memory bandwidth, they can slow down the application and make it consume more energy."

Sponge examples are not quite the same as adversarial examples. The goal is to force a system to perform more slowly rather than trying to trick it into an incorrect classification. Both are considered adversarial attacks and affect machine-learning models, however.

There is a simple way to prevent sponge attacks. “We propose that prior to the deployment of a model, natural examples get profiled to measure the time or energy cost of inference. The defender can then fix a cut-off threshold. This way, the maximum consumption of energy per inference run is controlled and sponge examples will have a bounded impact on availability,” the academics wrote in their study.

In other words, sponge examples can be combated by stopping a model processing a specific input if it consumes too much energy.
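
A minimal sketch of that idea might look like the following. The stand-in model, the calibration inputs, the 99th-percentile cut-off, and the two-times safety margin are illustrative assumptions rather than the paper's exact recipe, and the timeout here merely stops waiting for a result – a real deployment would also reclaim the compute, for instance by running inference in a separate process:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

# Minimal sketch of the defence described above: profile inference time on
# natural inputs, fix a cut-off, and stop waiting on any request that blows
# the budget. The model, calibration set, percentile, and margin are
# illustrative assumptions; a real deployment would profile energy as well
# as wall-clock time.

def time_one(model, x) -> float:
    start = time.perf_counter()
    model(x)
    return time.perf_counter() - start

def profile_threshold(model, natural_inputs, margin: float = 2.0) -> float:
    """Derive a per-inference time budget from timings on natural examples."""
    times = sorted(time_one(model, x) for x in natural_inputs)
    p99 = times[int(0.99 * (len(times) - 1))]   # ~99th percentile of natural cost
    return margin * p99

_POOL = ThreadPoolExecutor(max_workers=4)

def guarded_infer(model, x, budget_s: float):
    """Run inference but give up on the result once the budget is exceeded."""
    future = _POOL.submit(model, x)
    try:
        return future.result(timeout=budget_s)
    except TimeoutError:
        # Likely a sponge example: reject it rather than keep burning compute.
        # The worker thread is not killed here; a production system would run
        # inference in a separate process or enforce a hardware-level cap.
        return None

if __name__ == "__main__":
    def model(x):                       # stand-in for a real inference call
        time.sleep(0.001 * len(x))      # cost grows with the size of the input
        return len(x)

    budget = profile_threshold(model, ["a typical, benign input"] * 20)
    print(guarded_infer(model, "a typical, benign input", budget))  # -> 23
    print(guarded_infer(model, "x" * 2000, budget))                 # -> None (cut off)
```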

That may seem like an easy way to defend neural networks from sponge examples, but it’s unlikely that such a defense will be implemented, according to Shumailov. “You can make sponge attacks irrelevant by sizing your system so that it will work quickly enough even if an adversary forces it into worst-case performance. In that sense it’s easy.

“But often you won’t be able to afford that. People are spending humongous sums of money on accelerator chips to make machine learning run faster – yet these increase the distance between average-case and worst-case performance, and thus vulnerability. Firms are spending billions of dollars making their systems more vulnerable to sponge attacks, just as they spent billions of dollars on superscalar CPUs that made their server farms more vulnerable to Spectre and Meltdown attacks.”

Now, the boffins are planning to find sponge examples in AI systems that have been deployed in the real world.

“The sponge examples found by our attacks can be used in a targeted way to cause an embedded system to fall short of its performance goal. In the case of a machine-vision system in an autonomous vehicle, this might enable an attacker to crash the vehicle; in the case of a missile guided by a neural network target tracker, a sponge example countermeasure might break the tracking lock. Adversarial worst-case performance must, in such applications, be tested carefully by system engineers,” the paper concluded. ®
