
Bad boys bad boys, what you gonna do? Los Angeles Police Department found fibbing about facial recognition use

Plus: CIA launches new research lab for AI experts, and more

In brief The Los Angeles Police Department has run facial recognition algorithms a whopping 29,817 times over a decade in an attempt to identify suspected criminals captured in CCTV footage, despite promising it wouldn't.

Officers used software built by DataWorks Plus, the same biometrics company whose technology led to two wrongful arrests by the Detroit Police Department, according to the Los Angeles Times, which uncovered the setup. Technically, the LAPD does not have its own facial recognition tools; instead, it accesses the software through a database of mugshots compiled by the Los Angeles County Sheriff’s Department.
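For a sense of how this kind of matching generally works, here's a minimal, hypothetical sketch: faces in a probe image and in the mugshot gallery are reduced to embedding vectors, and the probe is ranked against the gallery by similarity. The embeddings, IDs, and threshold below are invented for illustration and have nothing to do with DataWorks Plus's actual system or the LAPD's workflow.

```python
import numpy as np

# Hypothetical sketch only: real systems produce embeddings with a deep network;
# here they are random vectors so the example is self-contained and runnable.
rng = np.random.default_rng(0)

# Pretend gallery of mugshot embeddings, keyed by a made-up booking ID.
gallery = {f"booking_{i:05d}": rng.normal(size=128) for i in range(1000)}

def rank_gallery(probe_embedding, gallery):
    """Rank gallery entries by cosine similarity to the probe embedding."""
    probe = probe_embedding / np.linalg.norm(probe_embedding)
    scores = {
        booking_id: float(probe @ (emb / np.linalg.norm(emb)))
        for booking_id, emb in gallery.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

probe = rng.normal(size=128)   # embedding extracted from a CCTV still (random here)
candidates = rank_gallery(probe, gallery)
THRESHOLD = 0.6                # arbitrary cut-off for this sketch
print(candidates[:3])                                        # best-scoring mugshots
print(any(score >= THRESHOLD for _, score in candidates))    # any "match" at all?
```

In practice everything interesting lives in the embedding model and the threshold: set the cut-off too low and innocent lookalikes surface as candidates, which is exactly where misidentification creeps in.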

The LAPD has consistently denied using the controversial technology, yet it ran the machine-learning algorithms nearly 30,000 times between November 2009 and September 2020. “We actually do not use facial recognition in the department,” an LAPD spokesperson told the LA Times last year, though the force later conceded it had been deployed in “a few limited instances.”

Experts have repeatedly called for a moratorium on the technology, as it tends to identify women and people with darker skin less accurately than Caucasian men, leading to racial bias.

Want to work with the CIA?

Uncle Sam’s Central Intelligence Agency has launched a new division focused on the research and development of bleeding-edge technologies to help it spy on other nations.

It’s interested in all sorts of trendy ideas across different areas, from AI and autonomous robots to virtual reality and blockchain. The launch of CIA Labs opens up an official channel to seek partnerships with experts working outside of the agency, like other federal research organizations or academic institutions.

“Some phenomenal innovations have come from CIA over the years, and with CIA Labs, we’re now better positioned to optimize developments and further invest in our scientists and technologists,” Dawn Meyerriecks, head of CIA’s Directorate of Science and Technology, said in a statement this week. “In an evolving threat landscape, CIA Labs will help us maintain our competitive edge and protect our nation.”

Salaries for federal employees are notoriously low compared to the private sector, making it difficult to attract and retain talent. CIA Labs gives its officers a way to boost their pay by letting them patent and license their inventions, effectively commercializing their technology. Profits are capped at $150,000 per year, according to MIT Tech Review.

AI algorithms to detect YouTube videos that need to be rated 18+

YouTube is using AI algorithms to automatically determine whether a particular video deserves an age restriction.

Content that contains things like swear words, nudity, violence, or drug use is slapped with a warning that orders users to log into their YouTube accounts to verify their age. Users under the age of 18 are prevented from pressing play on the moderated video. Now, YouTube wants AI to help it flag naughty videos.

“Going forward, we will build on our approach of using machine learning to detect content for review, by developing and adapting our technology to help us automatically apply age-restrictions,” the Alphabet-owned video sharing platform said this week. “Uploaders can appeal the decision if they believe it was incorrectly applied.”
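YouTube hasn't said how its classifiers actually work. Purely as a rough illustration of the idea of flagging content for age-restriction review, here's a toy text classifier over video metadata; the tiny training set, labels, and threshold are all invented for the example and the real system almost certainly also looks at the video and audio themselves.

```python
# Hypothetical sketch of flagging videos for age-restriction review from text
# metadata. Not YouTube's actual model; the training data and threshold are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

titles = [
    "graphic crime scene walkthrough, viewer discretion advised",
    "uncensored late-night stand-up special",
    "relaxing piano music for studying",
    "how to bake sourdough bread at home",
]
labels = [1, 1, 0, 0]  # 1 = flag for age-restriction review, 0 = leave alone

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(titles, labels)

new_title = ["uncensored crime footage compilation"]
score = model.predict_proba(new_title)[0][1]
print(f"review score: {score:.2f}")
if score > 0.5:  # arbitrary threshold for this sketch
    print("queue for age-restriction / human review")
```

Anything scoring above the threshold would go to reviewers or be auto-restricted, which is also where uploaders' appeals come in when the model gets it wrong.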

YouTube also said it expects more uploaded content to be age-restricted as a result. Younger teens or children trying to watch an age-restricted video embedded on a third-party website will be redirected to the platform, where they must sign in and verify their age before they can watch it.

AI might help YouTube add age restrictions on its videos more quickly, but it’s not a foolproof method of keeping kids away. The easiest way to game the system is to just make an account using a fake birthday, duh. ®
