Hundreds of Facebook moderators complain: AI content moderation isn't working and we're paying for it

Human contractors battling COVID-19 stress and psychological trauma


Facebook’s AI algorithms aren’t effective enough to automatically screen for violent images or child abuse, leaving the job to human moderators who are complaining about having to come into an office to screen harmful content during the coronavirus pandemic.

In an open letter to the social media giant, more than 200 content moderators said the company’s technology was not up to the job. “It is important to explain that the reason you have chosen to risk our lives is that this year Facebook tried using ‘AI’ to moderate content—and failed,” it said.

As COVID-19 spread across the world, Facebook ramped up its efforts to use machine-learning algorithms to automatically remove toxic posts. The letter, backed by Foxglove, a tech-focused non-profit, said the technology was supposed to make human moderators’ jobs easier: the worst content, such as graphic images of self-harm, violence, or child abuse, would be screened out beforehand, leaving them with less harmful work like removing hate speech or misinformation.

Initially there was some success, Cori Crider, director of Foxglove, told The Register. “During the at-home work period, at first, we did have reports of a decrease in people’s exposure to graphic content. But then, it appears from Facebook’s own transparency documents that this meant non-violating content got taken down and problematic stuff like self harm stayed up. This is the source of the drive to force these people back to the office.”

The moderators are kept six feet apart, but there have been numerous cases of staff members being infected with COVID-19 across multiple offices. “Workers have asked Facebook leadership, and the leadership of your outsourcing firms like Accenture and CPL, to take urgent steps to protect us and value our work. You refused. We are publishing this letter because we are left with no choice,” the letter continued.

The moderators have asked Facebook to let them work from home more often and to pay higher wages to those going into the office. They also want the company to provide health care and mental health services to help them deal with the psychological trauma of content moderation.

A Facebook spokesperson told El Reg in a statement that the company already offers healthcare benefits and that most moderators have been working from home during the pandemic.

“We appreciate the valuable work content reviewers do and we prioritize their health and safety. While we believe in having an open internal dialogue, these discussions need to be honest," the spokesperson said.

"The majority of these 15,000 global content reviewers have been working from home and will continue to do so for the duration of the pandemic. All of them have access to health care and confidential wellbeing resources from their first day of employment, and Facebook has exceeded health guidance on keeping facilities safe for any in-office work.”

Although the moderators receive some support, they don’t get the same benefits as full-time Facebook employees. “It is time to reorganize Facebook’s moderation work on the basis of equality and justice. We are the core of Facebook’s business. We deserve the rights and benefits of full Facebook staff,” the moderators concluded. ®
