Driver in Uber's self-driving car death goes on trial, says she feels 'betrayed'

Plus: Clearview slapped with a €20m fine by Italy's data regulator for scraping selfies, and more

In brief The name Rafaela Vasquez may not be immediately recognisable, but the incident that ties her to the first-ever fatal self-driving car crash will be.

Vasquez was the driver when one of Uber's autonomous test cars crashed into a woman walking her bike across the road at night in March 2018. Now, nearly four years later, she is due to go on trial for negligent homicide; she denies wrongdoing and has spoken out for the first time.

"I feel betrayed in a way," she told Wired. "At first everybody was all on my side, even the chief of police. Now it's the opposite. It was literally, one day I'm fine and next day I'm the villain. It's very – it's isolating."

While Uber has escaped any criminal liability, Vasquez was charged by prosecutors in Arizona. The blame has been pinned squarely on the driver after video footage showed she was distracted before the collision. Her defense team will argue she was checking Slack messages from Uber on her work phone at the time, while prosecutors will say she was watching an episode of reality show The Voice on her personal handset.

Whatever the outcome of the case, Vasquez's side of the story is a tragic, personal tale of some of the risks in pursuing a high-tech, automated future.

Clearview faces more fines, this time in Italy

Clearview AI was fined €20m ($21.8m) by Italy's data protection watchdog for unlawfully scraping selfies of Italian citizens for its controversial facial recognition software, in violation of Europe's General Data Protection Regulation (GDPR).

"The findings revealed that the personal data held by the company, including biometric and geolocation data, are processed illegally, without an adequate legal basis, which certainly cannot be the legitimate interest of the American company," the Garante per la Protezione dei Dati Personali said in a statement this week.

Clearview's CEO Hoan Ton-That admitted to downloading billions of photos from social media sites like Facebook and Instagram. The company uses these images to build a database so its facial recognition algorithm can try to find matches and identify people from CCTV footage or a photograph.

Ton-That told TechCrunch that Clearview did not operate in Italy, and that all the data it scraped from the internet is public. It will be difficult for Italy's data protection agency to force the company to pay up, but the decision could deter Italian companies from using Clearview's software.

Can AI make critical military decisions?

DARPA, the US government's defense research agency, has announced a new program looking at how algorithms could help make "critical decisions" during military operations.

Imagine a scenario where two soldiers in combat can't quite agree on how to proceed with a particular task; a machine could help resolve their conundrum so they can work together. DARPA is interested in developing these types of AI algorithms under a project named "In the Moment" (ITM).

"ITM is different from typical AI development approaches that require human agreement on the right outcomes," said Matt Turek, ITM program manager. "The lack of a right answer in difficult scenarios prevents us from using conventional AI evaluation techniques, which implicitly requires human agreement to create ground-truth data."

ITM is going to run for 3.5 years, during which researchers will identify what types of problems its decision-making algorithms can tackle, and collect a large amount of data on how humans might make those decisions themselves. Researchers will then create a "quantitative alignment score" to measure how far the algorithms' answers deviate from the human responses.
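To make the idea concrete, here is a minimal, purely hypothetical sketch of what such a comparison could look like; the function name and the scoring rule are assumptions for illustration, not DARPA's actual method. It simply checks how often an algorithm's choice matches the choices made by surveyed humans for the same scenario.

```python
# Hypothetical "alignment score" sketch: compare one algorithm's decision
# against the spread of human decisions collected for the same scenario.
from collections import Counter

def alignment_score(human_choices, algorithm_choice):
    """Return the fraction of human decision-makers who picked the same
    option as the algorithm (1.0 = full agreement, 0.0 = none)."""
    counts = Counter(human_choices)
    return counts[algorithm_choice] / len(human_choices)

# Example: for a hypothetical triage scenario, most humans chose "evacuate"
# while the algorithm chose "treat_on_site".
humans = ["evacuate", "evacuate", "treat_on_site", "evacuate", "evacuate"]
print(alignment_score(humans, "treat_on_site"))  # 0.2 -> weakly aligned
```

In practice the program would presumably use far richer scenario data and statistics than a simple agreement fraction, but the principle of scoring algorithmic answers against human responses is the same.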

Finally, the agency will consult legal counsel and experts in ethics to see how these systems can be used practically in the real world. ®
