Researchers trick Tesla into massively breaking the speed limit by sticking a 2-inch piece of electrical tape on a sign

You'd hope it would know 85mph speed limits aren't exactly routine


Vid A single piece of electrical tape stuck to a 35mph (56kph) road sign is enough to trick the autopilot software in Tesla's vehicles into speeding up to 85mph (137kph).

The vulnerability was reported by McAfee Labs, the security research arm of McAfee, on Wednesday. Steve Povolny, head of McAfee Advanced Threat Research, and Shivangee Trivedi, a data scientist working on the same team, discovered the attack when they probed the camera system aboard Tesla's Model X and Model S vehicles, both built in 2016.

Both cars use a camera containing the EyeQ3 chip from MobilEye, a computer vision company based in Israel and owned by Intel, to survey the vehicle's surroundings. Images are then fed as input to machine-learning algorithms to detect things like lane markings and signs so that Tesla's autopilot software can automatically take over the steering to change lanes and keep up with the speed limit, if the driver sets that up.

When the researchers placed a bit of black electrical tape measuring about two inches (5cm) long on a road sign depicting a 35mph (56kph) speed limit, the MobilEye camera in a Model X misread the sign as 85mph (137kph) and the car began speeding up accordingly.


The two-inch-long tape is placed onto the number three. This type of attack is described as Type A in the results below.

In the demo below, the driver takes over from the autopilot and begins braking when the car reaches about 50mph (80kph).

Youtube Video

The bug only affects older Tesla vehicles that carry the MobilEye EyeQ3 camera system, described as Hardware Pack 1. It also only works if the car supports Traffic Aware Cruise Control (TACC) in Tesla's autopilot software. The TACC feature allows the car's speed to be manipulated by road signs detected by the camera mounted on its windshield.

"We have repeated the testing numerous times – though it is not quite 100 per cent reliable in getting the misclassification while the vehicle is in motion. Once misclassified, the TACC feature is 100 per cent reliable in setting the incorrect target speed," a McAfee spokesperson told The Register.

The researchers attempted to fool Tesla's cameras into misclassifying the 35mph road sign by placing electrical tape on the sign in different ways. They tested a Model X 125 times on various sticker styles over four days.

The two-inch-long tape on the number three is described as "Type A" in the results. Type A was tested 43 times, and the camera mistook the 35mph sign for an 85mph one 25 times – so it was successfully tricked about 58 per cent of the time.

Pesky adversarial examples

The altered road sign is considered an adversarial example in the AI biz. To craft adversarial examples that consistently trick machine-learning algorithms, the researchers had to first experiment with an image classifier.

First, they attacked the image classifier with various adversarial examples to find the most effective ones. Since the researchers had full access to how that classifier works, this process is described as a "white-box" attack. Next, they fine-tuned and transferred the attack to Tesla's cameras without any detailed knowledge of how its image-recognition algorithm works, known as a "black-box" attack.

"What this means, in its most simple form, is attacks leveraging model hacking which are trained and executed against white box, also known as open source systems, will successfully transfer to black box, or fully closed and proprietary systems, so long as the features and properties of the attack are similar enough," they said.
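To give a flavor of what a white-box attack looks like, here is a minimal sketch of the fast gradient sign method (FGSM), a common way to craft adversarial examples when the attacker can read a model's gradients. The toy logistic-regression "classifier" and all its numbers are illustrative stand-ins, not McAfee's actual pipeline or MobilEye's model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed, pre-"trained" linear classifier: predicts class 1 when w.x + b > 0.
# (Real pipelines use deep networks; the attack principle is the same.)
w = np.array([2.0, -3.0])
b = 0.5

def predict(x):
    return int(sigmoid(w @ x + b) > 0.5)

def fgsm(x, true_label, eps):
    # White-box step: use the gradient of the loss with respect to the INPUT.
    # For logistic regression, d(loss)/dx = (p - y) * w, where p = sigmoid(w.x + b).
    p = sigmoid(w @ x + b)
    grad_x = (p - true_label) * w
    # Nudge every input dimension by eps in the direction that raises the loss.
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.5])                  # clean input, correctly classified as 1
x_adv = fgsm(x, true_label=1, eps=0.6)    # small, bounded perturbation
print(predict(x), predict(x_adv))         # the perturbation flips the label
```

The transfer idea the researchers describe is that a perturbation found this way against one model often still fools a different, closed model trained on similar data – which is why tuning stickers against their own classifier carried over to the MobilEye camera.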

Although it's alarming that a piece of black tape can fool a Tesla car into automatically speeding up, the potential dangers are probably pretty limited. The driver has to engage the TACC feature – by physically double-tapping the car's autopilot lever – after the cameras have been tricked, the researchers told El Reg.

"The driver would probably realize the car was accelerating quickly and engage the brakes or disable TACC, and other features, such as collision avoidance and possibly following distance could mitigate the possibility of a crash.

"The research was performed to illustrate the issues that vendors and consumers need to be aware of and to facilitate the development of safer products."


MobilEye's EyeQ3 camera system is deployed in over 40 million vehicles, including cars from Cadillac, Nissan, Audi and Volvo.

The researchers only tested MobilEye's cameras and the autopilot software in older Tesla Model S and X vehicles. Newer vehicles do not contain MobilEye cameras; Tesla ended its partnership with the Israeli biz in 2016.

It's not the first time that adversarial examples have managed to trick Tesla cars in real life. Last year, another group of researchers, from Tencent, showed that Tesla's cars could be forced to swerve across lanes by placing stickers on the road.

"We contacted and disclosed the findings to both MobilEye and Tesla in late September of 2019 – both expressed interest, and satisfaction with the research, but did not indicate any plans to address the models deployed in the field," the McAfee Labs researchers told us. We have contacted Tesla for comment. ®
