Cars in driver-assist mode hit a third of cyclists, all oncoming cars in tests

Still think we're ready for that autonomous future?


Autonomous cars may be further away than many believe: testing of three leading driver-assist systems found they hit a third of cyclists and failed to avoid any oncoming cars.

The tests [PDF] performed by the American Automobile Association (AAA) looked at three vehicles: a 2021 Hyundai Santa Fe with Highway Driving Assist; a 2021 Subaru Forester with EyeSight; and a 2020 Tesla Model 3 with Autopilot.

According to the AAA, all three systems represent the second of five autonomous driving levels, which require drivers to maintain alertness at all times to seize control from the computer when needed. There are no semi-autonomous cars generally available to the public that are able to operate above level two.
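
For orientation, here's a minimal sketch of the taxonomy that "level two" shorthand refers to. Strictly, the SAE J3016 standard defines six levels, 0 through 5; "second of five" counts the five levels that involve any automation. The helper function below is purely illustrative.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving automation levels (descriptions paraphrased)."""
    NO_AUTOMATION = 0           # human does all the driving
    DRIVER_ASSISTANCE = 1       # steering OR speed assistance, not both
    PARTIAL_AUTOMATION = 2      # steering AND speed; driver supervises constantly
    CONDITIONAL_AUTOMATION = 3  # system drives in limited conditions; driver on standby
    HIGH_AUTOMATION = 4         # no driver needed within a defined operating domain
    FULL_AUTOMATION = 5         # no driver needed anywhere

def needs_constant_supervision(level: SAELevel) -> bool:
    # Everything the public can generally buy today sits at or below this line
    return level <= SAELevel.PARTIAL_AUTOMATION
```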

The AAA reviewed multiple scenarios: how active driving assist (ADA) systems respond to slow-moving cars or cyclists ahead of them in the same lane; how they respond to oncoming vehicles crossing the center line; and how they respond to cyclists crossing their lane of travel. 

A semi-autonomous car hits a test dummy on a bicycle during testing ... Source: AAA

The first two scenarios evaluated adaptive cruise control (ACC), which decelerates or brakes a vehicle in response to slower or stopped objects ahead. All three vehicles detected their vehicle and cyclist targets and were able to match speed or stop in response. 

Those results were encouraging, and support "previous AAA research concluding that the ACC component of ADA systems are well-developed and perform according to expectations for typical closed-course scenarios and naturalistic driving environments," the report reads.
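
To make concrete what ACC is doing in these same-lane scenarios, here is a deliberately simplified sketch of a constant time-gap controller, the common textbook formulation. It is not any of the tested systems' logic; the gains, gap policy, and deceleration limit are all invented for illustration.

```python
SET_SPEED = 29.0   # driver-selected cruise speed in m/s (~65 mph)
TIME_GAP = 1.8     # desired following gap in seconds
MIN_GAP = 5.0      # standstill buffer in meters
KP_GAP = 0.3       # made-up proportional gain on gap error
KP_SPEED = 0.5     # made-up proportional gain on speed error
MAX_BRAKE = -6.0   # m/s^2, assumed deceleration limit

def acc_accel(ego_speed, lead=None):
    """Return a commanded acceleration in m/s^2.

    With no lead target detected, track the set speed; with a lead
    (distance, speed) tuple, regulate toward a speed-dependent gap.
    """
    cruise = KP_SPEED * (SET_SPEED - ego_speed)
    if lead is None:
        return cruise                                 # plain cruise control
    lead_dist, lead_speed = lead
    desired_gap = MIN_GAP + TIME_GAP * ego_speed      # constant time-gap policy
    gap_error = lead_dist - desired_gap               # positive means too far back
    closing = lead_speed - ego_speed                  # negative when closing in
    accel = KP_GAP * gap_error + KP_SPEED * closing
    return max(MAX_BRAKE, min(accel, cruise))         # never exceed cruise demand
```

Real systems layer sensor fusion, filtering, and comfort constraints on top of a policy like this, but the basic behavior — slow smoothly to match a detected target ahead — is what the AAA found to work well.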

Emergency ADA response lacking

Controlled deceleration in predictable scenarios is one thing; in emergency situations, the responses were far worse.

In tests involving an oncoming car crossing into the lane of the ADA-enabled vehicle, only one of the three, the Tesla Model 3, detected the oncoming car and slowed, though it still collided with the target.

To make matters worse, the AAA said the head-on test was performed at "unrealistically low vehicle speeds," with the ADA vehicle moving at 15mph (24kph) and the target vehicle at 25mph (40kph).

Were the tests done "at higher speeds characteristic of rural two-lane highways, it is unlikely that evaluated ADA systems would provide meaningful mitigation in the absence of driver intervention," AAA wrote in the report.
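
Some rough kinematics shows why the low test speeds matter: stopping distance grows with the square of speed, so the margin available at 15mph all but vanishes at highway speeds. A back-of-envelope sketch, assuming a hard 7 m/s² braking deceleration and a typical 55mph rural limit (figures not specified in the AAA report):

```python
MPH_TO_MS = 0.44704   # exact mph-to-m/s conversion factor
BRAKE_DECEL = 7.0     # m/s^2, roughly a hard stop on dry pavement (assumption)

def stop_distance_m(speed_mph: float) -> float:
    """Distance to brake to a standstill: d = v^2 / (2a)."""
    v = speed_mph * MPH_TO_MS
    return v * v / (2 * BRAKE_DECEL)

print(15 + 25)                          # 40 mph closing speed as tested
print(55 + 55)                          # 110 mph closing at an assumed rural limit
print(round(stop_distance_m(15), 1))    # ~3.2 m for the ADA car to shed its speed
print(round(stop_distance_m(55), 1))    # ~43.2 m at highway speed, ~13x farther
```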

Greg Brannon, AAA's director of automotive engineering, said that the ACC tests were encouraging, but the head-on test should be enough to give drivers pause: "A head-on crash is the deadliest kind, and these systems should be optimized for the situations where they can help the most." 

The response to cyclists crossing the vehicle's path was a little more encouraging, but not by much: rather than all three vehicles smacking into the cyclist without slowing, only the Subaru failed to detect the target, striking the cyclist dummy in each of its five test runs.


The driver of the ADA-enabled vehicle reached a predetermined speed in each scenario, engaged the system, and allowed the vehicle to hit, or avoid, its target without taking any action to slow, stop, or swerve. The cyclist and vehicle targets were both lightweight and designed to be harmless to the test vehicle and driver.
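
As a sketch of that protocol (the AAA report describes a procedure, not code; the vehicle and scenario interfaces here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class RunResult:
    scenario: str
    speed_mph: float
    collided: bool

def run_trials(vehicle, scenario, target_mph: float, runs: int = 5):
    """Mirror the described procedure: reach speed, engage ADA, hands off."""
    results = []
    for _ in range(runs):
        vehicle.accelerate_to(target_mph)   # hypothetical test-vehicle API
        vehicle.engage_ada()                # hand control to the system
        hit = scenario.play_out(vehicle)    # no braking, steering, or throttle input
        results.append(RunResult(scenario.name, target_mph, hit))
    return results
```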

From its work, the AAA concluded that ADA systems are incapable of operating safely without continual driver supervision, contrary to the impression given by what it called misleading marketing from auto manufacturers. The AAA said a 2018 survey it performed found that 40 percent of consumers believed names like "Autopilot" indicated the vehicle was capable of fully autonomous driving.

The AAA said automakers working on self-driving tech need to spend more time focusing on edge-case emergency scenarios, as well as employing active driver monitoring systems to ensure driver attentiveness.

On the one hand, driver-assistance software is just that: a tool for attentive human drivers, not something to be relied upon as a true self-driving system. On the other hand, the poor performance of level-two systems had better not be indicative of the safety of higher levels. Complicating matters, some of the marketing around driver-assistance technology already paints it as a hands-off, competent solution.

"Drivers tell us they expect their current driving assistance technology to perform safely all the time," Brannon said. "But unfortunately, our testing demonstrates spotty performance is the norm rather than the exception." ®
