Google fails to get AI engineer lawsuit claiming wrongful termination thrown out
Plus: Apple is building its own large language models internally, and AI South Park is terrible
AI in brief A judge has tentatively rejected Google's request to dismiss a lawsuit filed by a former engineer who accused the company of firing him for challenging an internal AI chip design research project.
Satrajit Chatterjee, an engineering manager in Google's AI research unit, claimed he was wrongfully terminated after he criticized his colleagues' research on reinforcement learning algorithms for chip design.
In a paper published in Nature, Google claimed its software was better at automatically generating chip floorplans than the conventional tools used in the hardware industry. Chatterjee led a separate team that disputed those results in another paper, which Google declined to authorize for publication.
He was fired after he raised the issue with top executives at the company, and later sued Google. Lawyers representing the company tried to get the lawsuit thrown out, but Judge Frederick Chung of the Superior Court in San Jose tentatively rejected Google's request, meaning the lawsuit will most likely go ahead.
Google will get a chance to contest the tentative ruling in a hearing next week before Chung makes a final decision on the matter, Bloomberg reported.
Meanwhile, Nature is scrutinizing Google's paper after other researchers raised issues and concerns with the work.
AI is being used to detect fare dodgers in subway stations
The Metropolitan Transportation Authority in New York has quietly rolled out AI software to try to catch people jumping the barriers at subway stations to avoid paying fares.
Seven subway stations had reportedly installed the technology as of May, and authorities hope to expand it to more than two dozen additional stations by the end of the year, according to NBC News.
It's not quite clear how the technology works. MTA spokesperson Joana Flores said it's being used to monitor the rate at which people are evading fares rather than to take direct legal action against them.
"We're using it essentially as a counting tool," said Tim Minton, the MTA's communications director. "The objective is to determine how many people are evading the fare and how they [are] doing it." He said that incidents are not reported to the police, and did not clarify whether that policy might change in the future.
The software was created by Awaait, a Spanish company that has built a system for tracking people on public transport and sending photos of suspected fare evaders to local transit agents.
"This is a moment where movement around the city has never been more surveilled," warned Albert Fox Cahn, director of the Surveillance Technology Oversight Project, a nonprofit privacy rights org.
Apple is reportedly building its own large language models
Apple has reportedly built its own framework to develop large language models capable of powering software and chatbots like OpenAI's ChatGPT or Google's Bard.
All the Big Tech companies are rushing to capitalize on the LLM craze, but Apple has stayed quiet about AI. People noticed that the company didn't even utter those dreaded two letters at its most recent WWDC event. Its engineers, however, have been busy developing internal tools to build large language models.
Apple's homegrown framework, dubbed "Ajax," has been used to create its own "Apple GPT" chatbot, according to Bloomberg. CEO Tim Cook previously said during a conference call that the company will be deploying more AI features into its products on a "very thoughtful basis" since there are a "number of issues that need to be sorted" with the technology.
Apple seems most concerned about privacy. Its chatbot app was reportedly created to test the software, and can only be accessed by employees who have been granted permission to use it. They have reportedly been warned that its outputs cannot be used to develop features for commercial applications.
AI-generated South Park is soulless and unfunny
A bizarre 22-minute South Park episode generated using AI, which dropped this week, is a reminder that human TV and film writers are irreplaceable as they continue to strike in Hollywood.
The cartoon was created by Fable, a San Francisco-based startup.
The visuals and sounds are very convincing, and it could definitely pass as a real South Park episode at first glance. But unlike the real show, it's not really funny at all. A paper describes a system that combines AI models, using GPT-4 to generate dialogue for the characters and custom-trained diffusion models to produce the corresponding images, yielding a series of scenes that are then stitched together and animated into a full episode.
Although the tool generates content, a human user is still required to steer the creative process. The work appears to be part of a wider project dubbed The Simulation, which seems to be a made-up company with a fake address and AI-generated employees.
"Powerful LLMs such as GPT-4 were trained on a large corpus of TV show data which lets us believe that with the right guidance users will be able to rewrite entire seasons," researchers at Fable wrote in their paper. Will this technology really take off? ®