Another AI attack, this time against 'black box' machine learning

The difference between George Clooney and Dustin Hoffman? Just a couple of pixels


Would you like to join the merry band of researchers breaking machine learning models? A trio of German researchers has published a tool designed to make it easier to craft adversarial examples when the target is a “black box”.

Unlike adversarial attacks that work “from the inside”, with access to a model's internals, attacks developed for black boxes could be used against closed systems like autonomous cars, security systems (facial recognition, for example), or speech recognition (Alexa or Cortana).

The paper describing the tool, called Foolbox, is currently under review for presentation at next year's International Conference on Learning Representations, which kicks off at the end of April.

Wieland Brendel, Jonas Rauber and Matthias Bethge of the Eberhard Karls University Tübingen, Germany, explained in a paper on arXiv that Foolbox implements a “decision-based” attack, dubbed the boundary attack, which “starts from a large adversarial perturbation and then seeks to reduce the perturbation while staying adversarial”.

[Image: Foolbox defeating celebrity ID – the attack tested against Clarifai's black-box AI]

“Its basic operating principle – starting from a large perturbation and successively reducing it – inverts the logic of essentially all previous adversarial attacks. Besides being surprisingly simple, the boundary attack is also extremely flexible”, they wrote.
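That operating principle is simple enough to sketch in a few lines. What follows is a minimal, illustrative Python/NumPy version of the core loop – not the authors' implementation: `predict_label` stands in for whatever decision-only black box is under attack, and the fixed step sizes `delta` and `epsilon` replace the adaptive tuning the paper describes.

```python
import numpy as np

def boundary_attack(predict_label, x_orig, true_label,
                    steps=5000, delta=0.1, epsilon=0.1, seed=0):
    """Illustrative sketch of the boundary attack's core loop.

    predict_label: callable mapping an image array to the black box's
    final class label -- the only access the attack requires.
    """
    rng = np.random.default_rng(seed)

    # Start from a large adversarial perturbation: random noise
    # that the model already misclassifies.
    x_adv = rng.uniform(0.0, 1.0, size=x_orig.shape)
    while predict_label(x_adv) == true_label:
        x_adv = rng.uniform(0.0, 1.0, size=x_orig.shape)

    for _ in range(steps):
        diff = x_orig - x_adv

        # Orthogonal step: a random direction, projected so the
        # candidate stays at roughly the same distance from the
        # original image -- a random walk along the decision boundary.
        eta = rng.normal(size=x_orig.shape)
        eta -= (np.vdot(eta, diff) / np.vdot(diff, diff)) * diff
        eta *= delta * np.linalg.norm(diff) / np.linalg.norm(eta)
        candidate = np.clip(x_adv + eta, 0.0, 1.0)

        # Contracting step: creep a small fraction toward the
        # original image, shrinking the perturbation.
        candidate = candidate + epsilon * (x_orig - candidate)

        # Keep the proposal only if it is still misclassified.
        if predict_label(candidate) != true_label:
            x_adv = candidate

    return x_adv
```

In the paper both step sizes are adjusted on the fly according to how often proposals are accepted; the fixed values here just keep the sketch short.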

Other attack families are more demanding: “transfer-based attacks”, for example, rely on access to the same training data as the models they're attacking, and need “cumbersome substitute models”.

Gradient-based attacks, the paper claimed, also need detailed knowledge about the target model, while score-based attacks need access to the target model's confidence scores.

The boundary attack, the paper said, only needs to see the final decision of a machine learning model – the class label it applies to an input, for example, or in a speech recognition model, the transcribed sentence.
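That interface is easy to model. As a purely illustrative sketch (the `model` callable below is an assumption, not Clarifai's actual API), the wrapper here turns any scoring classifier into the kind of decision-only oracle the loop above consumes – nothing but the winning label ever reaches the attacker:

```python
import numpy as np

def make_decision_oracle(model):
    """Expose only a classifier's final decision.

    `model` is any callable returning a vector of class scores; in a
    real black-box attack it would be a network call to a remote API.
    No scores, logits or gradients are visible to the attacker.
    """
    def predict_label(x):
        scores = model(x)              # e.g. shape (num_classes,)
        return int(np.argmax(scores))  # the final decision, nothing more
    return predict_label
```

Plugged together, `boundary_attack(make_decision_oracle(model), x_orig, true_label)` runs the whole attack with label-only queries.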

[Image: Foolbox defeating logo detection – the attack tested against logos in the Clarifai black box]

The researchers tested their attack against the Clarifai API, tricking it into misidentifying celebrities and overlooking prominent logos. ®
