Tesla Full Self-Driving videos prompt California's DMV to rethink policy on accidents
Plus: AI systems can identify different chess players by their moves and more
In brief California’s Department of Motor Vehicles said it’s “revisiting” its opinion on whether Tesla’s so-called Full Self-Driving feature needs more oversight, after a series of videos demonstrated how the technology can be dangerous.
“Recent software updates, videos showing dangerous use of that technology, open investigations by the National Highway Traffic Safety Administration, and the opinions of other experts in this space,” have made the DMV think twice about Tesla, according to a letter sent to California’s Senator Lena Gonzalez (D-Long Beach), chair of the Senate’s transportation committee, and first reported by the LA Times.
Unlike other self-driving car companies such as Waymo or Cruise, Tesla isn’t required to report crash numbers to California’s DMV because its technology operates at lower levels of autonomy and requires human supervision. But that may change as videos continue to circulate showing drivers having to take over to avoid swerving into pedestrians crossing the road, or the software failing to detect a truck in the middle of the road.
FSD is now available to all Tesla owners willing to fork over $12,000 for it.
AI algorithms can figure out your chess moves
AI models can identify anonymous chess players by analyzing how they move pieces to play the game, according to new research.
A team of computer scientists led by the University of Toronto trained a system on hundreds of games from 3,000 known chess players and one unnamed player. Even after hiding the first 15 moves of each game, the model was still able to identify the anonymous player 86 per cent of the time. The algorithm can capture different playing styles and patterns, and could serve as a tool to help players improve their technique.
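The idea of identifying a player from their moves can be sketched as a nearest-profile classifier: build a per-player distribution over moves, then match an anonymous game to the closest profile. Everything below — the player names, the toy games, and the bag-of-moves featurization — is illustrative only; the paper's actual model is a neural network trained on full move sequences, not this simple frequency matcher.

```python
from collections import Counter
import math

# Hypothetical toy data: each "player" is a list of games, and each game is
# a list of moves in algebraic notation. A real system trains on thousands
# of games per player.
PLAYERS = {
    "player_a": [["e4", "e5", "Nf3", "Nc6", "Bb5"],   # favours 1.e4 openings
                 ["e4", "c5", "Nf3", "d6", "Bb5+"]],
    "player_b": [["d4", "d5", "c4", "e6", "Nc3"],     # favours 1.d4 openings
                 ["d4", "Nf6", "c4", "g6", "Nc3"]],
}

def profile(games):
    """Bag-of-moves frequency vector over a set of games."""
    counts = Counter(move for game in games for move in game)
    total = sum(counts.values())
    return {move: c / total for move, c in counts.items()}

def cosine(p, q):
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(v * q.get(move, 0.0) for move, v in p.items())
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

def identify(anon_games, profiles):
    """Return the known player whose style profile best matches."""
    anon = profile(anon_games)
    return max(profiles, key=lambda name: cosine(anon, profiles[name]))

profiles = {name: profile(games) for name, games in PLAYERS.items()}
anon_game = [["e4", "e5", "Nf3", "Nc6", "Bb5", "a6"]]  # unseen game
print(identify(anon_game, profiles))  # → player_a
```

Even this crude move-frequency fingerprint separates the two toy players; the published result shows how much further a learned model can push the same intuition.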
But some experts have raised concerns about the research, according to Science. The technique could be used to uncover the identities of people online. One reviewer of the paper, which was accepted at the Neural Information Processing Systems conference last month, said it could be “of interest to marketers [and] law enforcement.”
The model could also be expanded to analyze the styles of players in different games like poker. The researchers have decided not to release the source code for now, according to Science.
GitHub’s Copilot AI programming model can talk to you whilst you code
A developer experimenting with GitHub’s AI pair-programming software Copilot has shown just how sensitive its generated text is to the formatting of its inputs.
Copilot is a code completion tool. As programmers type away, it suggests the next few snippets of code to help them complete the task more efficiently. But one developer has, instead, been trying to get it to write plain English.
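That interaction looks roughly like this: the programmer types a comment and a function signature, and the tool proposes a body. The completion below is illustrative only — a plausible suggestion of the kind Copilot makes, not an actual capture of its output.

```python
# Prompt typed by the programmer:
# compute the nth Fibonacci number iteratively
def fib(n):
    # --- body of the kind a completion tool might suggest ---
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(10))  # → 55
```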
It’s not surprising that Copilot can do this considering the model is based on OpenAI’s GPT-3 language model. GitHub’s software, however, is not really designed to generate text so it’s interesting to see how capable it is compared to GPT-3.
One developer, Ido Nov, found that Copilot could hold a simple chatbot-style conversation, answer questions reasonably well, summarize Wikipedia pages, and write poetry. The model’s outputs, however, can vary wildly depending on the inputs.
“I noticed a bit of a strange thing,” he wrote in a blog post. “The way letters are formatted had an effect on its behavior, and I’m not talking about compilation errors. It might mean it understands the difference in tone between TALKING LIKE THIS, or like this.”
Here’s an example of the oddity in a fictional chat between the coder and Mark Zuckerberg. The prompt “Mark: FACEBOOK IS NOW META, Me: WHY?” led to Copilot generating “Mark: FACEBOOK IS NOW META” (which isn’t that great). But if you fed Copilot the same prompt all in lower case, it replied “Mark: because it’s easier to implement” (which is much more interesting). Weird. ®