Facebook ditches its creepy, controversial robot – yes, its facial-recognition AI

Social network is going to sit this one out until clear rules are formulated


Updated Having last week sidelined the tarnished brand Facebook to conduct business under the name Meta, the social ad biz intends to deactivate at least some of its facial-recognition systems in a few weeks.

In a blog post on Tuesday, Jerome Pesenti, VP of artificial intelligence at Facebook, said the social ad platform – still referred to as Facebook – is "shutting down the Face Recognition system on Facebook."

Pesenti describes the shift as part of a company-wide move away from the use of facial recognition in its products. While he continues to see positive uses for the technology – which the company intends to continue to develop under the rubric of its Responsible Innovation framework – he acknowledges that current social concerns about the technology need to be addressed.

"[T]he many specific instances where facial recognition can be helpful need to be weighed against growing concerns about the use of this technology as a whole," said Pesenti. "There are many concerns about the place of facial recognition technology in society, and regulators are still in the process of providing a clear set of rules governing its use. Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate."

Facebook was among the early users of the technology, implementing it back in 2010 to help identify people in photos posted by users of the site.

The technology was immediately controversial and has become more so as it has proliferated. Facebook was sued in Illinois in 2015 under the state's 2008 Biometric Information Privacy Act over its use of facial recognition, and recently agreed to settle that lawsuit for $650m.

Other companies that have pushed to deploy the technology in the US, like Clearview AI, have faced lawsuits of their own.

By casting facial recognition aside for the time being, Facebook may be able to avoid embarrassing incidents like having its video recommendation system label black people as primates.

Pesenti said the shutdown, which involves deleting more than a billion facial-recognition templates representing people's facial characteristics, will affect some of Facebook's services. Those who have opted into Facebook's image tagging will no longer be automatically identified in photos and videos. And Automatic Alt Text (AAT), which generates descriptions of images for blind and visually impaired users, will no longer name people recognized in photos.
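
Facebook hasn't published the internals of its Face Recognition system, but template-based matching of this kind is commonly built by comparing fixed-length face embeddings against stored vectors, one per opted-in user. The Python sketch below is purely illustrative under that assumption; the embedding size, threshold, and names such as match_face are hypothetical and not taken from Facebook's code.

    # Illustrative sketch only: the template format, threshold, and matching
    # logic here are assumptions, not Facebook's actual implementation.
    from __future__ import annotations

    import numpy as np

    EMBEDDING_DIM = 128      # hypothetical template size
    MATCH_THRESHOLD = 0.8    # hypothetical similarity cut-off

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two embedding vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def match_face(photo_embedding: np.ndarray,
                   stored_templates: dict[str, np.ndarray]) -> str | None:
        """Return the ID of the opted-in user whose stored template best
        matches the face in a photo, or None if nothing clears the threshold."""
        best_user, best_score = None, MATCH_THRESHOLD
        for user_id, template in stored_templates.items():
            score = cosine_similarity(photo_embedding, template)
            if score > best_score:
                best_user, best_score = user_id, score
        return best_user

    # Deleting the templates, in this model, simply means discarding the
    # stored vectors; after that, no new photo can be matched back to a person.
    rng = np.random.default_rng(0)
    templates = {"user_123": rng.normal(size=EMBEDDING_DIM)}
    print(match_face(rng.normal(size=EMBEDDING_DIM), templates))  # most likely None
    templates.clear()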

Advocacy groups have endorsed Facebook's decision to distance itself from the technology.

"This is good news," said Adam Schwartz, senior staff attorney at the Electronic Frontier Foundation, in an email to The Register. "Face recognition invades our privacy, unfairly discriminates against people of color, and chills free speech. Facebook leaving the face recognition business reflects a growing national discomfort with this dangerous technology."

Schwartz said he couldn't speculate about whether or not the decision followed from the litigation in Illinois. "Big companies presumably listen to their consumers and respond to privacy laws," he said. "The momentum is against face recognition technology."

"Facial recognition is one of the most dangerous and politically toxic technologies ever created," said Caitlin Seeley George, campaign director at Fight for the Future, in an email to The Register. "Even Facebook knows that."

"Companies like Delta and Macy’s should ask themselves what they’re thinking by expanding and doubling down on their use of the highly dangerous technology. Lawmakers should ask why they’re allowing government contracts with Clearview instead of banning use of the technology."

Facial recognition, said Seeley George, misidentifies people of color, leading to wrongful arrests, and denies people the opportunity to live life without being constantly surveilled. She argues that governments, law enforcement, and private companies cannot be trusted with such invasive technology.

"[A]s algorithms improve, facial recognition will only be more dangerous," said Seeley George. "This technology will enable authoritarian governments to target and crack down on religious minorities and political dissent; it will automate the funneling of people into prisons without making us safer; it will create new tools for stalking, abuse, and identity theft."

"There is only [one] logical action for lawmakers and companies: it should be banned," she said. ®

Updated to add

We were careful to say Facebook is ditching at least some of its facial-recognition tech. Keyword: Facebook. Meta, with its goofy 3D fantasy metaverse, reportedly hasn't ruled out using the technology, and other biometrics, in various shapes and forms in future.

