
US cops kick back against facial recognition bans

Plus: DeepMind launches new generalist AI system, and Apple boffin quits over return-to-work policy

In brief Facial recognition bans passed by US cities are being overturned as law enforcement and lobbyist groups pressure local governments to tackle rising crime rates.

In July, the state of Virginia will scrap its ban on the controversial technology after less than a year. California and the city of New Orleans may follow suit, Reuters first reported. Vermont has adjusted its law to allow police to use facial recognition software in child sex abuse investigations.

Elsewhere, efforts are under way in New York, Colorado, and Indiana to prevent bills banning facial recognition from passing. It's not clear if some existing bans that are set to expire, like the one in California, will be renewed. Around two dozen US state or local governments passed laws prohibiting facial recognition from 2019 to 2021. Police, however, believe the tool is useful for identifying suspects and can help solve cases, especially in places where crime rates have risen.

"Technology is needed to solve these crimes and to hold individuals accountable," police superintendent Shaun Ferguson from New Orleans previously told reporters.

But some disagree.

"Police departments are exploiting people's fears about crime to amass more power," Jennifer Jones, a staff attorney for ACLU of Northern California, said. "This has been for decades, we see new technologies being pushed in moments of crisis."

General AI model can complete over 600 tasks

Researchers at DeepMind have built what they call "a generalist agent," an AI system capable of performing hundreds of different tasks from chatting to playing video games.

The model, dubbed Gato, is trained on a large number of datasets containing image and text data from simulated and real worlds. Powered by a single transformer-based network, the system can "play Atari, caption images, chat, stack blocks with a real robot arm and much more," according to a research paper [PDF].

Gato seems to pack quite a punch for a small model at just 1.2 billion parameters, far smaller than other transformer models like OpenAI's GPT-3. Although Gato can perform numerous tasks, it doesn't always perform as well as networks trained for a single specialized task.
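To make the "one network, many tasks" idea concrete, here is a minimal, hypothetical sketch of the approach described in the paper — our illustration, not DeepMind's code. Tokens from different modalities (text or discretised actions, and flattened image patches) are embedded into a shared space and pushed through a single transformer that predicts the next token, whatever the task. All class and parameter names below are made up.

```python
import torch
import torch.nn as nn

class TinyGeneralistAgent(nn.Module):
    """Toy sketch: one transformer over a shared token space for several modalities."""

    def __init__(self, vocab_size=32_000, d_model=256, n_layers=4, n_heads=4, max_len=1024):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)   # text / discretised action tokens
        self.patch_emb = nn.Linear(16 * 16 * 3, d_model)     # flattened 16x16 RGB image patches
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)           # next-token prediction for every task

    def forward(self, token_ids, image_patches):
        # Embed both modalities and concatenate them into one sequence.
        tok = self.token_emb(token_ids)                       # (B, T_text, D)
        img = self.patch_emb(image_patches)                   # (B, T_img, D)
        seq = torch.cat([img, tok], dim=1)
        pos = self.pos_emb(torch.arange(seq.size(1), device=seq.device))
        h = self.backbone(seq + pos)                          # causal masking omitted for brevity
        return self.head(h)                                   # logits over the shared vocabulary

# Example: a batch with 4 image patches followed by 8 text/action tokens.
model = TinyGeneralistAgent()
logits = model(torch.randint(0, 32_000, (1, 8)), torch.rand(1, 4, 16 * 16 * 3))
print(logits.shape)  # torch.Size([1, 12, 32000])
```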

Research labs and startups are racing to build what they call artificial general intelligence: technologies that can complete all sorts of tasks as well as a human can, if not better. Gato explores the idea of a single system performing multiple tasks, but it's not really intelligent in the way humans are. You can read more about the model here.

Top AI expert reportedly leaves Apple over having to go back to the office

A director of machine learning at Apple has reportedly resigned over the company's return-to-work policy, which requires employees to go into the office three days a week starting from 23 May.

Ian Goodfellow led the iGiant's secretive "Special Projects Group", where he had worked for more than three years. Rumor has it that he was involved in Apple's push to develop self-driving car software. Goodfellow was said to be unhappy about having to return to the office, and reportedly decided to quit.

Goodfellow is best known for inventing generative adversarial networks, a type of neural network often used to create AI-generated images. StyleGAN, for example, the model credited with producing hyperrealistic photos of fake humans, is powered by a generative adversarial network.
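For readers who haven't met the technique, the sketch below shows the basic adversarial training loop on toy two-dimensional data: a generator learns to fool a discriminator that is simultaneously learning to tell real samples from fakes. It is a generic illustration only, not StyleGAN or anything from Apple, and the toy "real" distribution is made up.

```python
import torch
import torch.nn as nn

# Generator maps random noise to a fake sample; discriminator scores real vs fake.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

# Stand-in "real" data: a Gaussian blob centred at (2, -1).
real_data = torch.randn(512, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(200):
    real = real_data[torch.randint(0, 512, (64,))]
    fake = G(torch.randn(64, 16))

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fresh fakes as real.
    g_loss = bce(D(G(torch.randn(64, 16))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```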

Apple and Goodfellow did not respond to our requests for confirmation. 

AI21 Labs releases new mid-sized language model

If you're looking for a more affordable language model to build upon, AI21 Labs just launched J1-Grande, a new, cheaper model with 17 billion parameters.

J1-Grande is a bit bigger than J1-Large (7.5 billion parameters) and much smaller than J1-Jumbo (178 billion parameters). Language models are computationally intensive to train and run. Developers using the API typically pay for the number of tokens generated by the system, with larger, higher-performing models costing more per token.
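As a rough illustration of how that per-token pricing adds up, the snippet below estimates the cost of a large generation job under three per-1,000-token rates. The figures and model names' rates are placeholders for the sake of arithmetic, not AI21's actual prices.

```python
# Hypothetical per-1,000-token prices -- placeholders, not AI21's actual rates.
PRICE_PER_1K_TOKENS = {"j1-large": 0.0003, "j1-grande": 0.0008, "j1-jumbo": 0.0025}

def estimate_cost(model: str, tokens_generated: int) -> float:
    """Rough cost of a workload that generates `tokens_generated` tokens."""
    return PRICE_PER_1K_TOKENS[model] * tokens_generated / 1_000

# A job generating 10 million tokens, priced under each made-up rate.
for name in PRICE_PER_1K_TOKENS:
    print(f"{name}: ${estimate_cost(name, 10_000_000):,.2f}")
```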

The Israel-based startup said its latest language model is twice as fast as Jumbo at one-third of the price.

"In fact, while Grande is significantly closer in size to J1-Large, a great majority of users have found that J1-Grande's quality is comparable to that of J1-Jumbo," it said in a blog post. "This is great news for all budget-conscious practitioners; J1-Grande, our mid-size model, offers access to supreme quality text generation at a more affordable rate." ®
