Surprise: Automated driving biz finds automated driving safer than letting you get behind the wheel

Waymo recreated 72 fatal crashes and it turns out its simulated AI driver isn't a careless, distracted jerk


Keen to prove that automated cars are safer than human-driven vehicles, Waymo, the robotaxi biz spun out of Google in 2016, set out to simulate 72 fatal crashes that occurred from 2008 through 2017 in the vicinity of Chandler, Arizona.

Chandler, a popular venue for testing automated cars in the United States due to its favorable weather and minimal regulation, became a Waymo test ground in 2016. It's about 16 miles from Tempe, where in 2018 an Uber self-driving car collided with and killed pedestrian Elaine Herzberg, an incident for which safety driver Rafaela Vasquez was charged last year with negligent homicide; she has pleaded not guilty.

Trent Victor, director of safety research and best practices at Waymo, said in a blog post on Monday that the company's latest research supports data released in October 2020, which showed its AI system, known as the Waymo Driver, had been involved only in minor collisions over six million miles on the road.

The automatic auto firm recreated 72 fatal crashes in 91 simulations, putting the Waymo Driver in the role of both the initiating driver and the responding driver for accidents involving two vehicles. And in the simulator at least, the company's software outperformed the humans involved in the recreated incidents.

"When we swapped in the Waymo Driver as the simulated initiator (52 simulations), it avoided every crash by consistent, competent driving, and obeying the rules of the road—yielding appropriately to traffic, executing proper gap selection, and observing traffic signals," said Victor.

The implication is that humans could do as well, if they actually obeyed road rules, made a concerted effort to drive carefully when behind the wheel, and were never distracted or impaired.

When the AI system acted as the responding driver, it managed to avoid a collision 82 per cent of the time, and in a further 10 per cent of incidents it took action that reduced the severity of the recreated crash.

The remaining eight per cent of responder simulations, in which Waymo's code couldn't improve on the outcome, were all rear-end collisions – a type of crash human drivers also have a hard time avoiding.
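To make the arithmetic behind those figures concrete, here is a minimal sketch – purely illustrative, not Waymo's tooling – that tallies counterfactual simulation outcomes into the same avoided/mitigated/unchanged buckets. The counts are back-calculated assumptions: 91 simulations minus the 52 initiator runs leaves 39 responder runs, and 82, 10, and 8 per cent of 39 rounds to 32, 4, and 3.

```python
from collections import Counter

# Illustrative outcome labels only: back-calculated from the percentages
# quoted above (82% avoided, 10% mitigated, 8% unchanged) across the 39
# responder-role runs implied by 91 total simulations minus 52 initiator runs.
responder_outcomes = ["avoided"] * 32 + ["mitigated"] * 4 + ["unchanged"] * 3

def outcome_breakdown(outcomes):
    """Return each outcome's share of all simulations, as a rounded percentage."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return {label: round(100 * n / total) for label, n in counts.items()}

print(outcome_breakdown(responder_outcomes))
# {'avoided': 82, 'mitigated': 10, 'unchanged': 8}
```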


Victor argues that because 94 per cent of crashes involve human error, based on NHTSA data, Waymo's robotaxis have the opportunity to improve road safety.

The research paper [PDF] describing Waymo's findings is a bit more circumspect, allowing that the tests have limitations and real-world outcomes may differ. For example, the paper notes that the simulations did not include any other vehicle traffic, so any effect such traffic might have on the Waymo Driver's sensors is not modeled.

It also observes that "the collection of potential failure modes for [automated driving systems] may be different than that of a human," which alludes to potential problems arising from sensor or electronics snafus that would not hinder a human driver.

More than 60 companies are currently testing self-driving vehicles in California or hold permits to do so. As one of them, Waymo is obligated to report its disengagement rate, the rate at which a human safety driver has to take over from the Waymo Driver.

The figure Waymo reported last year was one disengagement per almost 30,000 miles, an improvement on one per 13,219 miles in 2019. The mech chauffeur biz has nonetheless argued that the disengagement rate is not representative of its system's capabilities.
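For readers unfamiliar with the metric, the calculation is simple division. The sketch below uses made-up round numbers that happen to land on the roughly one-per-30,000-miles figure quoted above; it is not Waymo's actual DMV filing.

```python
def miles_per_disengagement(autonomous_miles: float, disengagements: int) -> float:
    """Average autonomous miles driven between human takeovers."""
    return autonomous_miles / disengagements

# Hypothetical numbers: 600,000 autonomous miles with 20 takeovers works out
# to one disengagement per 30,000 miles, in line with the reported figure;
# the 2019 rate was one per 13,219 miles.
print(miles_per_disengagement(600_000, 20))  # -> 30000.0
```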

Waymo is still working on driving in snow. ®
