Rekognition still racist, politicians desperate over deepfakes, and a good reason to go to (some) music festivals
Let's bring you up to speed on the latest misuses of machine-learning tech
Roundup Here's our latest summary of AI news beyond what we've already covered. It’s all about two favourite topics in machine learning today: facial recognition and deepfakes.
Over 40 festivals pledge to not use facial recognition: A campaign against facial recognition led by the nonprofit Fight for the Future has led to over 40 music festivals publicly committing that they would not use the technology.
Evan Greer, the group's deputy director, and Tom Morello, guitarist for rock band Rage Against the Machine, teamed up to pen an op-ed celebrating the effort to push back on the smart AI cameras.
“Over the last month, artists and fans waged a grassroots war to stop Orwellian surveillance technology from invading live music events,” they wrote in BuzzFeed News. “Today we declare victory. Our campaign pushed more than 40 of the world’s largest music festivals — like Coachella, Bonnaroo, and SXSW — to go on the record and state clearly that they have no plans to use facial recognition technology at their events.”
Musicians and fans were invited to write to their favorite festival organizers, urging them to not support facial recognition. Now, the list of festivals that have confirmed they won’t be using the tech has grown. There are still a few top names that have yet to respond, however, including Burning Man and Outside Lands. You can see the complete list here.
Amazon’s facial recognition tool fails on black athletes: Amazon’s controversial Rekognition software incorrectly matched the faces of 27 black athletes competing in American football, baseball, basketball, and hockey to suspected criminals in a mugshot database.
An experiment by the American Civil Liberties Union (ACLU) revealed the dangers of relying on facial recognition technology like Rekognition.
“This technology is flawed,” said Duron Harmon, a safety for the New England Patriots whose face was falsely identified in the experiment. “If it misidentified me, my teammates, and other professional athletes in an experiment, imagine the real-life impact of false matches. This technology should not be used by the government without protections. Massachusetts should press pause on face surveillance technology.”
The ACLU took headshots of 188 athletes from the Boston Bruins, Boston Celtics, Boston Red Sox, and New England Patriots, and ran them against a database containing 20,000 criminal arrest photos to see whether there would be any matches. There shouldn’t be any. But about one in seven athletes – 27 of the 188 – were mistakenly identified.
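The size of the gallery is part of the problem: even a tiny chance of a false match on any single probe-versus-mugshot comparison adds up when every headshot is checked against 20,000 mugshots. As a back-of-the-envelope illustration only – the per-comparison rate below is an assumption for the sake of the arithmetic, not a figure from Rekognition or from the ACLU's experiment:

```python
# Illustrative only: a hypothetical per-comparison false-match rate,
# not a measured figure for Rekognition or the ACLU test.
P_FALSE_MATCH = 1e-5   # assumed chance one probe-vs-mugshot comparison falsely matches
GALLERY_SIZE = 20_000  # number of mugshots in the ACLU's test database

# Probability a single probe photo gets at least one false match
# somewhere in the gallery: 1 - (1 - p)^N.
p_at_least_one = 1 - (1 - P_FALSE_MATCH) ** GALLERY_SIZE
print(f"{p_at_least_one:.1%}")  # roughly 18% per probe under these assumptions
```

Under that made-up rate, each of the 188 probe photos would have roughly an 18 percent chance of a spurious hit – dozens of expected false matches overall. The real numbers depend entirely on the model and the confidence threshold chosen, which is exactly why critics object to deploying the software with default settings.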
A similar study performed by the ACLU with US Congress members last year in July revealed that Rekognition struggled to identify politicians with darker skin. It sparked the ACLU’s campaign to stop top tech companies like Amazon, Microsoft, and Google from supplying facial recognition software to federal government agencies.
Okay, moving on to deepfakes: a term used to describe visual and audio content generated by neural networks to dupe people into believing fake information.
Senate passes a bill to better understand deepfakes: The US Senate passed a bipartisan bill that would require the Department of Homeland Security to publish a detailed report into the risks of deepfakes.
The Deepfake Report Act introduced by Rob Portman (R-OH) was first mooted in September. The bill dictates that the DHS must “produce a report on the state of digital content forgery technology” annually for the next five years, according to The Hill.
Senators are particularly interested in how deepfakes will improve and evolve over time, how they can be used to commit financial fraud, and how foreign adversaries can use them to undermine America’s national security.
It just so happened that the House Committee on Homeland Security also discussed the threats of deepfakes in a hearing this week. Experts were called in to give evidence about possible future threats as the technology becomes more and more refined. Foreign adversaries could create deepfakes of politicians, duping another country’s citizens into believing fake news and swaying their opinions. Such false content could be carefully planted during elections to sow discord and threaten democracies.
We covered that hearing and you can read more about that here.
Facebook’s effort to fight deepfakes: The social media giant recently announced an open challenge encouraging AI engineers from the industry and academia to build algorithms that can detect deepfakes.
Now, it has released a research paper and a dataset containing 5,000 videos that have been edited by two algorithms to create deepfakes. Participants taking on the Deepfake Detection Challenge (DFDC) will have to craft models that can detect these fake videos by training them on the dataset provided.
The videos contain footage featuring subjects who are about 74 percent female and 26 percent male; and 68 percent Caucasian, 20 percent African-American, 9 percent East Asian, and 3 percent South Asian. In order to train robust detection models, it’s important that the training dataset is diverse.
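Diversity matters because an aggregate accuracy number can hide failures concentrated in one group – the same pattern the ACLU experiments exposed. A minimal sketch of the idea, with made-up predictions and group labels (nothing here comes from the actual challenge data):

```python
from collections import defaultdict

def per_group_error_rate(predictions, labels, groups):
    """Fraction of misclassified examples within each demographic group."""
    wrong = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred != label:
            wrong[group] += 1
    return {g: wrong[g] / total[g] for g in total}

# Toy example: a detector that looks decent overall (1 error in 8)
# but concentrates its mistakes in one under-represented group.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
labels = [1, 0, 1, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "A", "B", "B"]
print(per_group_error_rate(preds, labels, groups))
# → {'A': 0.0, 'B': 0.5}
```

The overall error rate here is 12.5 percent, yet group B is wrong half the time – a breakdown like this only surfaces if the evaluation (and training) data contains enough examples from every group.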
You can register to download the dataset now; the challenge itself begins in December this year. ®