Clearview AI accused over free trials to US police that were plausibly deniable
Plus: Another Google AI boffin resigns and AI tries to recreate music from famous musicians who died at 27
In Brief A year-long investigation into Clearview, the dodgy facial recognition startup, has revealed how its software has been used by over 1,800 public agencies in an attempt to identify over 7,000 people from 2018 to 2020.
The data collected by BuzzFeed News showed just how haphazardly the machine-learning software was used. In an attempt to win customer contracts, Clearview gave out free trials to public agencies, including law enforcement and even outfits like Washington's Department of Fish and Wildlife and Minnesota's Commerce Fraud Bureau.
Employees could apparently use the technology on whomever they wanted, whether they were trying to identify a suspect in a criminal case or students at universities. In one particularly disturbing case, police officers in Alameda, California, continued to use Clearview's tools even though the local city council had voted in 2019 to ban the use of public facial recognition tools.
AI algorithms aren’t perfect, and they particularly struggle to correctly identify women and people of colour. The data has been compiled into a handy searchable database.
Google AI Research manager resigns after org is reshuffled
The manager who oversaw Google’s AI ethics unit, which has seen two researchers pushed out, has resigned.
Samy Bengio, a well-known name in the academic world of machine learning, has become the most senior member of the Chocolate Factory to leave after it controversially ousted Timnit Gebru and Margaret Mitchell.
Although Bengio did not explicitly say in an email to his colleagues why he decided to leave, he hinted at the recent fiasco, in which Google fired its Ethical AI team co-leads over a paper that was critical of massive language models.
“I learned so much with all of you, in terms of machine learning research of course, but also on how difficult yet important it is to organize a large team of researchers so as to promote long term ambitious research, exploration, rigor, diversity and inclusion,” he wrote, according to Bloomberg.
Google has since reshuffled the management of its AI research teams.
Intel’s AI chips are going into a new academic supercomputer
Chipzilla's own-brand machine learning chips will be used to build Voyager, a new supercomputer for the University of California, San Diego, and it's expected to be up and running later this year.
Intel has been trying to give Nvidia a run for its money by developing its own training and inference chips to challenge the GPU. But it fell woefully behind and abandoned previous attempts led by Nervana, a startup it acquired in 2016.
It later snapped up Habana, another AI hardware startup, in 2019, and it was out with the old and in with the new. Now it appears some of Intel’s, or rather Habana’s, hard work is paying off.
“The Voyager supercomputer will use Habana’s unique interconnectivity technology to efficiently scale AI capacity with 336 Gaudi processors for training and 16 Habana Goya processors for AI inference,” the company said in a statement.
It’s difficult to work out just how good these chips really are, however: an Intel representative declined to comment on the supercomputer's performance or to provide a detailed breakdown of the chips’ specs.
Listen to AI-generated rock music
A non-profit organisation focused on mental health and music has published a series of AI-generated songs in the style of artists who tragically died at the age of 27 from suicide or drug-related causes.
The project, named Lost Tapes of the 27 Club, features songs based on artists including Jimi Hendrix, Janis Joplin, Kurt Cobain, and Amy Winehouse. The group hasn't released details of exactly how it produced the tracks, but there are some interesting mashups.
You can listen to the tracks here. The non-profit, Over The Bridge, hopes that this will raise awareness of mental health issues. ®