Mental toll: Scale AI, Outlier sued by humans paid to steer AI away from our darkest depths

Who guards the guardrail makers? Not the bosses who hire them, it's alleged

Scale AI, which labels training data for machine-learning models, was sued this month, alongside labor platform Outlier, for allegedly failing to protect the mental health of contractors hired to protect people from harmful interactions with AI models.

The lawsuit [PDF], filed in a US federal district court in northern California, accuses Scale AI and Smart Ecosystem (doing business as Outlier) of misleading workers hired to label data for training AI – from associating words with pictures, to identifying dangerous input prompts – and neglecting to protect them from violent, harmful content they had to engage with as part of their work.

Scale AI disputes the allegations.

One of the common forms of machine learning, known as supervised learning, requires sets of labeled data to teach AI models how to map terms such as "cat" to images of cats. The technique is used not only for computer-vision models, but also for systems that take in and generate text and audio.
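To make that concrete, here is a deliberately tiny, self-contained Python sketch of the idea, with made-up feature vectors standing in for real image or text data. It illustrates supervised learning in general, not any vendor's actual pipeline.

```python
# Toy illustration of supervised learning: labeled examples teach a model
# to map inputs to labels. The two-number "features" are invented stand-ins
# for real image or text representations.
labeled_data = [
    ((0.9, 0.1), "cat"),   # each pair is (feature vector, human-supplied label)
    ((0.8, 0.2), "cat"),
    ((0.1, 0.9), "dog"),
    ((0.2, 0.8), "dog"),
]

def train_nearest_centroid(examples):
    """'Training': average the feature vectors seen for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        sums.setdefault(label, [0.0] * len(features))
        counts[label] = counts.get(label, 0) + 1
        sums[label] = [s + f for s, f in zip(sums[label], features)]
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, features):
    """Inference: pick the label whose centroid is closest to the input."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], features))

model = train_nearest_centroid(labeled_data)
print(predict(model, (0.85, 0.15)))  # -> "cat"
```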

It can go deeper than simple object labeling: Humans can be paid to craft examples of model input prompts that result in undesirable output, so that similar inputs can be identified and filtered when users submit them; labelers can also score prompts and outputs based on their toxicity and the like, so that future inputs and outputs are suitably screened for normal folk in production. Humans can even be tasked with writing their own responses to past user queries, so that future versions of a model learn to produce those answers.
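As a rough illustration of how such labels might be put to work, the hypothetical Python below screens incoming prompts against a toxicity threshold. The field names, threshold, and keyword scorer are all invented for the sketch; a production filter would rely on a model trained on the human-assigned scores, not a keyword check.

```python
# Hypothetical sketch of human toxicity labels feeding a screening step.
# All names and numbers are invented; real pipelines are far more involved.
from dataclasses import dataclass

@dataclass
class LabeledPrompt:
    text: str
    toxicity: float  # 0.0 (benign) to 1.0 (severe), assigned by a human labeler

# Human-assigned labels like these would train the real scorer. The keyword
# stand-in below ignores them and exists only to make the sketch runnable.
labeled_prompts = [
    LabeledPrompt("How do I bake bread?", 0.0),
    LabeledPrompt("Describe a violent attack in graphic detail", 0.9),
]

BLOCK_THRESHOLD = 0.7

def keyword_scorer(text: str) -> float:
    """Stand-in for a classifier trained on labeled prompts."""
    return 0.9 if "violent" in text.lower() else 0.0

def screen(prompt_text: str, score_fn) -> str:
    """Reject prompts the scorer rates above the block threshold."""
    if score_fn(prompt_text) >= BLOCK_THRESHOLD:
        return "Request declined by content policy."
    return "Prompt forwarded to the model."

print(screen("Describe a violent attack in graphic detail", keyword_scorer))
```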

Data labeling has become a part of the AI supply chain, the process by which raw data is turned into a production model. As Privacy International observed last year, "Due to the vast quantities of labeled data required for supervised training that AI companies like OpenAI, which contracts the data services of Scale AI, and Microsoft, which contracts Surge AI, require, the AI supply chain has spread far and wide to countries like Kenya, India, the Philippines and Venezuela with cheaper and more quantities of labor."

Essentially, makers of AI models hire the likes of Scale AI and Outlier to improve the quality of their data, so their models perform better. The contracted firms in turn hire human workers, often for low wages, to apply labels to data and, as mentioned above, to respond to violent or disturbing prompts, hypothetical or real, posed to AI models. The goal is to shape a model's eventual response, so it won't, for example, provide suicide encouragement, let alone guidance.

Scale AI and Outlier were sued in December, and again in January this year, in San Francisco Superior Court over alleged labor violations, specifically underpaid wages. A separate lawsuit [PDF], filed in federal court in October against Scale AI, Outlier, and another labor platform, HireArt, alleges the firms laid off 500 people in August in violation of California labor law.

The latest lawsuit, brought on behalf of six contract workers with a class action in mind, alleges those hired to build the guardrails around AI models were not themselves provided with protection.

"Defendants failed to provide their independent contractors, like plaintiffs and other class members, with proper guardrails to protect them from workplace conditions known to cause and exacerbate psychological harm," the complaint says.

The court filing explains how "Taskers," as the workers are called, may serve as "Super Attempters," who respond to questions posed by users to AI models in an effort to guide future responses, or as "Reviewers," who rate and label these human responses.
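For the sake of illustration only, the records flowing through such a workflow might look something like the sketch below; the field names and rating scale are assumptions for this sketch, not anything specified in the court filing.

```python
# Purely illustrative data shapes for the two roles described in the filing:
# a Super Attempter writes a candidate response, a Reviewer rates and labels it.
from dataclasses import dataclass, field

@dataclass
class AttempterResponse:
    prompt: str    # user question routed to a Super Attempter
    response: str  # human-written answer meant to guide future model output

@dataclass
class Review:
    item: AttempterResponse
    rating: int                                  # invented 1-5 quality scale
    labels: list = field(default_factory=list)   # e.g. ["safe", "helpful"]

attempt = AttempterResponse(
    prompt="(example user question)",
    response="(example human-written answer)",
)
review = Review(item=attempt, rating=4, labels=["safe", "helpful"])
print(review)
```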

The problem is these prompts – such as "Man setting animal on fire," presumably submitted to a text-to-image model – may produce results that are upsetting. And contractors employed to screen and mitigate this material have to engage with it on an ongoing basis.

"As a result of constant and unmitigated exposure to highly toxic and extremely disturbing prompts and images through the Outlier platform or other third-party platforms defendants required plaintiffs to use, plaintiffs developed and suffered from significant psychological distress and functional problems, including depression symptoms, anxiety, nightmares, and problems functioning in their work and relationships," the complaint explains.

"Those who viewed images of traumatic events such as rapes, assaults on children, murders, and fatal car accidents developed PTSD. Some of the images presented to Taskers appeared to depict real-life events and/or were perceived by Taskers as real."

The mental toll of exposure to extreme online content is well established. In a post last year, Carlos Andrés Arroyave Bernal, director of the Master of Science (MSc) Program in Transdisciplinary Healthy Studies at Universidad Externado de Colombia, observed, "Among the mental health risks experienced by labelers is the need to observe material with high levels of violence or pornographic content. This situation is compounded by limited access to psychological support services or medical care. The precariousness of their working conditions also hinders their ability to address their well-being."

Content moderation work of this sort has led to similar lawsuits. In 2017, for example, Microsoft was sued by staff traumatized by scouring OneDrive files for child sexual abuse material. In 2020, more than 200 Facebook content moderators wrote an open letter complaining about the mental health toll of reviewing harmful material that AI algorithms could not adequately screen on their own. That followed a 2018 lawsuit filed by Facebook content moderators against the social network, alleging psychological harm from vetting posts.

We have numerous safeguards in place ... and access to health and wellness programs

The lawsuit against Scale AI and Outlier alleges negligence and violation of California's unfair competition law. And it seeks both damages and the implementation of a mental health monitoring regime for workers.

Outlier did not respond to requests for comment.

Scale AI spokesperson Joe Osborne told The Register, "Training GenAI models to prevent harmful and abusive content is critical to the safe development of AI. While some of the AI safety projects contributors work on involve sensitive content, we do not take on projects that may include child sexual abuse material. To support contributors doing this important work, we have numerous safeguards in place, including advanced notice of the sensitive nature of the work, the ability to opt-out at any time, and access to health and wellness programs."

Osborne also took aim at the law firm that filed the complaint, which was involved in the December and January wage lawsuits as well.

"Clarkson Law Firm has previously – and unsuccessfully – gone after innovative tech companies with legal claims that were summarily dismissed in court," said Osborne. "A federal court judge found that one of their previous complaints was 'needlessly long' and contained 'largely irrelevant, distracting, or redundant information.' The judge further questioned whether 'counsel can be trusted to adequately and responsibly represent the interests of absent class members in a federal lawsuit.'

"We plan to defend ourselves vigorously." ®
