Tesla employee: I was fired after sharing video of self-driving car crash

Plus: FTC orders model destruction

In brief Tesla reportedly fired an employee after he uploaded videos to YouTube critiquing the automaker's autonomous driving software.

John Bernal, an ex-Tesla operator working on the Autopilot platform, runs a YouTube channel under the username AI Addict. He has filmed and shared several videos demonstrating the capabilities of Tesla's still-in-development Full Self-Driving (FSD) product.

He claims he was axed by management in February after being told that he "broke Tesla policy" and that his YouTube channel was a "conflict of interest," according to CNBC. Bernal insists he never revealed confidential information, and that his reviews were always of FSD versions that had been released to public beta testers.

One of his videos shows him riding around Oakland, California, during which FSD made his car turn right from the wrong lane, almost swerve into oncoming traffic to avoid a cyclist, and drive poorly in other circumstances. Another shows how FSD made his car crash into bollards in nearby San Jose.

Tesla carefully controls its public image by giving FSD beta access to content makers who promote the software. One driver previously told The Register he couldn't talk to us about it as the system is still not generally available. Tesla also ditched its PR department and doesn't take inquiries from the press.

"I still care about Tesla, vehicle safety, and finding and fixing bugs," Bernal said.

Destroy your AI models, and delete the data

The US Federal Trade Commission (FTC) is getting stricter with companies suspected of unlawfully collecting data, ordering them to not only delete the records but to also scrap any AI models trained on the info.

The regulatory body has included a requirement to destroy data and corresponding trained models in three settlements with businesses in the past three years, Protocol noted.

That's what happened to Weight Watchers this month, when it was accused of unlawfully siphoning data from an app aimed at getting young adults and children to eat more healthily. The company was ordered to destroy any machine-learning models that were built using the data.

The first time the FTC made this demand, it was directed at Cambridge Analytica. The second time it struck Everalbum, a photo-sharing app, when the company was accused of scraping selfies without permission to build a facial-recognition algorithm.

"Cambridge Analytica was a good decision, but I wasn't certain that [the rule] was going to become a pattern," commented Pam Dixon, executive director of World Privacy Forum.

Dixon and other experts now believe the FTC will force more companies to delete any data obtained without consent as well as any models that might have been built using those samples.

GPT-3 can now edit text or code

OpenAI's language model typically responds to an input prompt with an output. Give it another prompt, and it comes up with something else. Now, however, users can get GPT-3 to revise its output by editing its prompts.

OpenAI demonstrated how it works in a short video clip.

Instead of having to rewrite or rerun input prompts from the beginning, users can edit the input text directly and have GPT-3 revise its output accordingly. Being able to edit existing text or insert new text will make life easier for developers using the GPT-3-powered Codex tool, and for people writing longer pieces of text.

"Codex was our original motivation for developing this capability, since in software development we typically add code to the middle of an existing file where code is present before and after the completion," OpenAI explained.
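In practical terms, the edit capability takes an input text plus a natural-language instruction, while insertion works by giving the model the text before and after the gap it should fill. The sketch below builds the request payloads for both; the model names and field names are assumptions based on OpenAI's public documentation at the time of the announcement, and no API call is actually made:

```python
import json

def build_edit_request(text: str, instruction: str) -> dict:
    """Payload for the edits endpoint: the model rewrites `input`
    according to the natural-language `instruction`."""
    return {
        "model": "text-davinci-edit-001",  # assumed edit-capable model name
        "input": text,
        "instruction": instruction,
    }

def build_insert_request(prefix: str, suffix: str) -> dict:
    """Payload for insertion via the completions endpoint: the model
    generates text to fit between `prompt` (before) and `suffix` (after),
    e.g. new code in the middle of an existing file."""
    return {
        "model": "text-davinci-002",  # assumed insert-capable model name
        "prompt": prefix,
        "suffix": suffix,
        "max_tokens": 64,
    }

# Example: fix a typo via an instruction, and fill in a function body
# between an existing signature and the code that follows it.
edit_req = build_edit_request("helo wrold", "Fix the spelling mistakes")
insert_req = build_insert_request("def add(a, b):\n", "\n\nprint(add(1, 2))\n")
print(json.dumps(edit_req, indent=2))
```

The key design difference is that editing is instruction-driven (tell the model what to change), while insertion is context-driven (show the model what surrounds the gap), which is why OpenAI cites mid-file code completion as the motivating case.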

The editing function is free to use, but the insertion feature will cost you. ®
