
Detroit cops employed facial-recognition algos that misidentify suspects 96 per cent of the time

Plus: Experts back a national cloud for ML research

In brief Cops in Detroit have admitted using facial-recognition technology that fails to accurately identify potential suspects a whopping 96 per cent of the time.

The revelation was made by the American force's chief, James Craig, during a public hearing this week. Craig was grilled over the wrongful arrest of Robert Williams, who was mistaken for a shoplifter by facial-recognition software used by officers.

“If we would use the software only [to identify subjects], we would not solve the case 95-97 per cent of the time,” Craig said, Vice first reported. “That’s if we relied totally on the software, which would be against our current policy … If we were just to use the technology by itself, to identify someone, I would say 96 per cent of the time it would misidentify."

The software was developed by DataWorks Plus, a biometric technology biz based in South Carolina. Multiple studies have demonstrated that facial-recognition algorithms often struggle to identify women and people with darker skin as accurately as they identify Caucasian men.

US national AI cloud mulled

A bipartisan law bill calling for the US government to set up a cloud platform giving researchers access to computational resources for public AI research was submitted to Congress for consideration this week.

The National AI Research Resource Task Force Act was drawn up by politicians in the House of Representatives and the Senate. Several universities and research institutions, as well as tech companies including Google, OpenAI, and Nvidia, announced their support for the legislation.

“It is an essential first step towards establishment of a national resource that would accelerate and strengthen AI research across the U.S. by removing the high-cost barrier to entry of compute and data resources,” said Eric Schmidt, ex-CEO of Google and chairman of the National Security Commission on Artificial Intelligence.

“If realized, this infrastructure would democratize AI R&D outside of elite universities and big technology companies and further enable the application of AI approaches across scientific fields and disciplines, unlocking breakthroughs that will drive growth in our economy and strengthen national security.”

Transformer models just keep getting bigger

Folks over at Google have built a set of new API tools to scale up machine-learning models that contain more than 600 billion parameters.

The paper [PDF] is pretty technical, and it describes various techniques to split up and train such large models more easily. The Googlers demonstrated their ideas by testing them on a massive transformer-based machine-translation model. They trained the system, which contained more than 600 billion parameters, on 2,048 TPU v3 math accelerators for four days.
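For a rough intuition of what is going on under the hood, here is a minimal, purely illustrative Python sketch of sparsely gated expert routing, the conditional-computation trick such giant models lean on. The expert count, layer sizes, and function names below are invented for the example and bear no relation to Google's actual code.

# Toy sketch only -- not Google's implementation. A small gating network
# picks a couple of "expert" sub-networks per token, so most of the model's
# parameters sit idle on any given input and can live on separate accelerators.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 4   # real systems use hundreds or thousands of experts
TOP_K = 2         # each token is routed to only this many experts
D_MODEL = 8       # toy embedding width

# Each "expert" is just a random feed-forward weight matrix in this sketch.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]
gate_weights = rng.standard_normal((D_MODEL, NUM_EXPERTS))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def moe_layer(token):
    """Route one token vector to its top-k experts and mix their outputs."""
    scores = softmax(token @ gate_weights)   # gating network scores each expert
    top = np.argsort(scores)[-TOP_K:]        # keep only the k highest-scoring experts
    output = np.zeros_like(token)
    for i in top:
        # Only the chosen experts do any work; the rest are skipped entirely,
        # which is what keeps a 600-billion-parameter model affordable to run.
        output += scores[i] * (token @ experts[i])
    return output

token = rng.standard_normal(D_MODEL)
print(moe_layer(token))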

The giant model was able to translate 100 different languages into English and “achieved far superior translation quality compared to prior art,” the researchers claimed. The system is a beefed-up version of the Sparsely-Gated Mixture-of-Experts model introduced in 2017, which initially had 137 billion parameters. ®
