
Cruise self-driving cars stopped and clogged up San Francisco for hours

Plus: Head of AI and Autopilot at Tesla quits, FIFA rolls out AI in this year's football World Cup, and more

In brief Weeks after Cruise launched its autonomous, driverless taxi rides to the public in San Francisco, a cluster of its vehicles mysteriously ground to a halt, blocking several lanes of traffic downtown.

At least seven cars were spotted stopped at a spot in San Francisco's Civic Center neighborhood at night. The driverless vehicles had halted for reasons unknown, preventing nearby traffic from moving, and it's not clear what technical glitch they seemingly suffered. Some of these issues were raised in an anonymous letter sent to the California Public Utilities Commission; the letter claimed Cruise is looking to launch its commercial robotaxi service too early, the Wall Street Journal reported.

Tesla's head of AI and Autopilot leaves

Andrej Karpathy, Tesla's senior director of AI and a computer vision expert who helped the company develop its self-driving software, announced he was leaving after five years at the automaker.

There were rumors Karpathy wasn't going to come back after he said in March he was taking a four-month sabbatical, according to Electrek. Karpathy was hired to lead Tesla's AI and self-driving efforts in 2017, leaving his previous role as a research scientist at OpenAI.

Top boss Elon Musk thanked him for his service via a message on Twitter. Karpathy leaves at a dicey time for the company. Tesla's share price has dropped amid worsening market conditions, and it has shut down one of its offices in San Mateo. It is also facing heightened scrutiny that could lead to the National Highway Traffic Safety Administration issuing a recall for hundreds of thousands of its cars.

Karpathy said he wasn't sure what he was going to do next but will focus on "technical work in AI, open source and education."

AI makes its way to the 2022 World Cup

AI-powered cameras will be deployed to help referees decide whether football players are offside in the upcoming 2022 World Cup tournament to be held in Qatar starting in November.

The technology involves placing a sensor inside the football and a series of cameras under the roof of each stadium. The sensor will track the ball's position on the field, and footage from the cameras will be fed into machine learning algorithms that track the players' locations.

When the software detects a player is offside, an alert will be sent to people at a nearby control room. The information will be relayed to the referee, who will then decide whether to call the offence or not. 
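FIFA hasn't published its decision logic, but the final offside check can be illustrated with a minimal sketch, assuming player and ball positions are reduced to a single coordinate along the length of the pitch; every name and simplification below is hypothetical rather than FIFA's implementation.

# Illustrative sketch only: FIFA hasn't published its decision logic, and
# the names below are hypothetical. Positions are simplified to a single
# coordinate (metres towards the opponents' goal line); the real system
# tracks players and the ball in three dimensions.

from dataclasses import dataclass

@dataclass
class Player:
    team: str    # "attack" or "defence"
    x: float     # metres towards the opponents' goal line

def offside_attackers(players: list[Player], ball_x: float) -> list[Player]:
    """Return attackers in an offside position at the moment the ball is played.

    Simplified rule: an attacker is offside if they are nearer to the
    opponents' goal line than both the ball and the second-last defender.
    (The real law also requires the player to be in the opposition half
    and actively involved in play.)
    """
    defence_xs = sorted((p.x for p in players if p.team == "defence"), reverse=True)
    if len(defence_xs) < 2:
        return []
    second_last_defender = defence_xs[1]
    threshold = max(ball_x, second_last_defender)
    return [p for p in players if p.team == "attack" and p.x > threshold]

# Two defenders at 95m and 88m and the ball at 60m: an attacker at 90m is
# flagged, one at 85m is not.
players = [
    Player("defence", 95.0), Player("defence", 88.0),
    Player("attack", 90.0), Player("attack", 85.0),
]
print(offside_attackers(players, ball_x=60.0))

In FIFA's system, of course, such a flag goes to the officials in the video operation room rather than straight to the pitch.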

Pierluigi Collina, chairman of the FIFA Referees Committee, said the automated system will allow referees to make "faster and more accurate decisions," and said humans, not robots, were still in charge, according to The Verge. Gianni Infantino, FIFA's current president, said the technology had been three years in the making and only takes seconds to call offside.

AI ethics review of research, yay or nay?

Academic conferences are asking AI researchers to consider in technical papers how their research could potentially lead to societal harm, and not everyone is happy.

As AI and machine learning technology continues to progress in academia, it's inevitable some of these techniques will end up being deployed in real life. Those applications often show that the same algorithms can be used for good or ill: improved computer vision algorithms, for example, help develop self-driving cars but are also used for surveillance.

AI-focused conferences like Neural Information Processing Systems and now the Conference on Computer Vision and Pattern Recognition are asking researchers to write paragraphs considering if and how their research could be harmful. But not everyone supports the initiative, Protocol reported. Some researchers believe it's beyond the scope of their work or could impinge on research freedom, while others acknowledge their work could be abused in certain use cases.

"We're still at a point in AI ethics where it's very hard for us to properly assess and mitigate ethics issues without the partnership of folks who are intimately involved in developing this technology," said Alice Xiang, who leads Sony Group's AI ethics office and was a general co-chair at the ACM Conference on Fairness, Accountability, and Transparency.

Clearview fined by Greek authorities

Controversial facial recognition startup Clearview was fined €20 million for violating privacy laws by Greece's Hellenic Data Protection Authority (HDPA).

The company was accused of violating current EU GDPR rules by failing to obtain explicit consent to use individuals' personal data when it scraped billions of photographs posted on the internet. These images were used to build Clearview's database for its face-matching algorithms.

Given a picture, the company's software searches for potential matches against images in its database to reveal someone's identity by linking to their social media profiles, for example.
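Clearview hasn't detailed its pipeline, but face search engines of this kind generally map each photo to a numeric embedding and look up the closest entries in a large index. A rough sketch under that assumption, with hypothetical names throughout:

# Illustrative sketch only: Clearview's actual pipeline isn't public.
# embed_face() is a hypothetical stand-in for a trained face-embedding model.

import numpy as np

def embed_face(image) -> np.ndarray:
    """Map a face crop to a fixed-length feature vector (placeholder)."""
    raise NotImplementedError("stand-in for a real face-embedding network")

def best_matches(query_vec: np.ndarray, db_vecs: np.ndarray,
                 source_urls: list[str], k: int = 5):
    """Return the k scraped photos most similar to the query face.

    db_vecs:     (N, D) array of embeddings for the scraped photo database.
    source_urls: where each photo was found, e.g. a social media profile.
    Cosine similarity is a common choice for comparing face embeddings.
    """
    q = query_vec / np.linalg.norm(query_vec)
    db = db_vecs / np.linalg.norm(db_vecs, axis=1, keepdims=True)
    scores = db @ q
    top = np.argsort(scores)[::-1][:k]
    return [(source_urls[i], float(scores[i])) for i in top]

At the scale of billions of scraped photos, the exhaustive comparison above would typically be replaced by an approximate nearest-neighbour index, but the idea is the same.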

The fine was the largest ever ordered by the HDPA, according to The Record. A Clearview spokesperson claimed it "does not undertake any activities that would otherwise mean it is subject to the GDPR." ®
