Rekognition still racist, politicians desperate over deepfakes, and a good reason to go to (some) music festivals

Let's bring you up to speed on the latest misuses of machine-learning tech

Roundup Here's our latest summary of AI news beyond what we've already covered. It’s all about two favourite topics in machine learning today: facial recognition and deepfakes.

Over 40 festivals pledge to not use facial recognition: A campaign against facial recognition led by the nonprofit Fight for the Future has prompted more than 40 music festivals to publicly commit to not using the technology at their events.

Evan Greer, the nonprofit's deputy director, and Tom Morello, guitarist for rock band Rage Against the Machine, teamed up to pen an op-ed celebrating the efforts to push back on the smart AI cameras.

“Over the last month, artists and fans waged a grassroots war to stop Orwellian surveillance technology from invading live music events,” they wrote in BuzzFeed News. “Today we declare victory. Our campaign pushed more than 40 of the world’s largest music festivals — like Coachella, Bonnaroo, and SXSW — to go on the record and state clearly that they have no plans to use facial recognition technology at their events.”

Musicians and fans were invited to write to their favorite festival organizers, urging them not to use facial recognition. The list of festivals confirming they won’t be using the tech has since grown, though a few top names, including Burning Man and Outside Lands, have yet to respond. You can see the complete list here.

Amazon’s facial recognition tool fails on black athletes: Amazon’s controversial Rekognition software mistakenly matched the faces of 27 black athletes competing in American football, baseball, basketball, and hockey to suspected criminals in a mugshot database.

An experiment by the American Civil Liberties Union (ACLU) revealed the dangers of relying on facial recognition technology like Rekognition.

“This technology is flawed,” said Duron Harmon, a safety for the New England Patriots whose face was falsely identified in the experiment. “If it misidentified me, my teammates, and other professional athletes in an experiment, imagine the real-life impact of false matches. This technology should not be used by the government without protections. Massachusetts should press pause on face surveillance technology.”

The ACLU took headshots of 188 athletes from the Boston Bruins, Boston Celtics, Boston Red Sox, and New England Patriots, and ran them against a database containing 20,000 criminal arrest photos to see whether there would be any matches. There shouldn’t have been any, yet nearly one in six athletes were mistakenly identified.

A similar study performed by the ACLU in July last year on members of the US Congress revealed that Rekognition struggled to identify politicians with darker skin. It sparked the ACLU’s campaign to stop top tech companies such as Amazon, Microsoft, and Google from supplying facial recognition software to federal government agencies.

Okay, moving on to deepfakes: a term for visual and audio content generated by neural networks to dupe people into believing fake information.

Senate passes a bill to better understand deepfakes: The US Senate passed a bipartisan bill that would require the Department of Homeland Security to publish a detailed report on the risks of deepfakes.

The Deepfake Report Act, introduced by Rob Portman (R-OH), was first mooted in September. The bill dictates that the DHS must “produce a report on the state of digital content forgery technology” annually for the next five years, according to The Hill.

Senators are particularly interested in how deepfakes will improve and evolve over time, how they can be used to commit financial fraud, and how foreign adversaries can use them to undermine America’s national security.

It just so happened that the House Committee on Homeland Security also discussed the threats of deepfakes in a hearing this week. Experts were called in to give evidence about possible future threats as the technology becomes increasingly refined. Foreign adversaries could create deepfakes of politicians, duping another country’s citizens into believing fake news and swaying their opinions. Such false content could be carefully planted during elections to sow discord and threaten democracies.

We covered that hearing and you can read more about that here.

Facebook’s effort to fight deepfakes: The social media giant recently announced an open challenge encouraging AI engineers from the industry and academia to build algorithms that can detect deepfakes.

Now, it has released a research paper and a dataset containing 5,000 videos that have been edited by two algorithms to create deepfakes. Participants taking on the Deepfake Detection Challenge (DFDC) will have to craft models that can detect these fake videos by training them on the dataset provided.

The videos contain footage of roughly 74 percent female and 26 percent male subjects; and 68 percent Caucasian, 20 percent African-American, 9 percent east-Asian, and 3 percent south-Asian people. A diverse training dataset is important for building robust detection models.
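For a rough sense of scale, those percentages can be turned into approximate video counts across the 5,000-clip set. This is a back-of-the-envelope sketch that assumes the reported proportions apply evenly to the whole dataset; the actual per-video labels come from Facebook's release, not from this arithmetic.

```python
# Estimate subject counts in the DFDC dataset from the reported percentages.
# Assumption: the percentages apply uniformly across all 5,000 videos.

TOTAL_VIDEOS = 5_000

gender_split = {"female": 0.74, "male": 0.26}
ethnicity_split = {
    "Caucasian": 0.68,
    "African-American": 0.20,
    "east-Asian": 0.09,
    "south-Asian": 0.03,
}

gender_counts = {k: round(v * TOTAL_VIDEOS) for k, v in gender_split.items()}
ethnicity_counts = {k: round(v * TOTAL_VIDEOS) for k, v in ethnicity_split.items()}

print(gender_counts)     # {'female': 3700, 'male': 1300}
print(ethnicity_counts)  # {'Caucasian': 3400, 'African-American': 1000,
                         #  'east-Asian': 450, 'south-Asian': 150}
```

Note that the smallest group, south-Asian subjects, works out to only around 150 videos, which is why dataset diversity matters for models meant to generalize.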

You can register to download the dataset now, and the challenge begins in December this year. ®
