Judge demands social media sites prove they didn't help radicalize mass shooter

Section 230 on trial: Plaintiffs contend social media platforms are defective products, not protected message boards

Some of the largest social media platforms in the world will soon try to convince a US court their platforms did not contribute to the radicalization of a mass shooter who killed ten people and injured three more in a New York grocery store in 2022.

Depending on the outcome, the case could reshape liability rules for social media sites.

In a court order [PDF] published on Tuesday, Justice Paula Feroleto of the New York Supreme Court's Eighth Judicial District denied a dismissal request from Meta, Reddit, Twitch owner Amazon, YouTube parent Alphabet, Discord, and 4chan. All of them will now have to argue their case in court.

"Many of the social media/internet defendants have attempted to establish that their platforms are mere message boards," Feroleto wrote.

"This may ultimately prove true," the judge noted. However, "the Court has determined the complaint sufficiently pleads viable causes of action to go forward at this stage of the litigation," she added.

Section 230 goes to court

At issue in the case is how the platforms characterize their services, versus the plaintiffs' contention that the companies are liable for Payton Gendron's shooting at a Tops Friendly Markets store in a predominantly Black neighborhood of Buffalo, New York.

According to the complaint, Gendron – who has pleaded guilty, was sentenced to life in prison and is awaiting a federal death penalty trial – was radicalized by the content he discovered on the defendants' platforms.

"By his own admission, Gendron, a vulnerable teenager, was not racist until he became addicted to social media apps and was lured, unsuspectingly, into a psychological vortex by defective social media applications designed, marketed, and pushed out by Social Media Defendants," the plaintiffs argue in their complaint [PDF].

The platforms argue their sites are simply message boards, protected by the First Amendment's guarantee of free speech and by the Communications Decency Act (CDA) – specifically Section 230, which shields platforms from liability for content posted by their users.

The plaintiffs, families of people killed by Gendron, agree that the CDA and the First Amendment protect the platforms. However, as the judge notes, that's not the argument they're making.

"[Plaintiffs] instead contend the defendants' platforms are negligently, defectively and harmfully designed 'products' that drove Gendron to specific materials and that they are therefore liable," Feroleto wrote. If viewed as products and not platforms, Section 230 is irrelevant, the plaintiffs argue.

New York law has long held that product manufacturers are liable for harm caused by their products – even to people who weren't using the product themselves, the judge said.

"Contrary to the defense assertions, at this stage of the proceedings, the plaintiffs' allegations concerning product liability establish a basis for 'duty' to these plaintiffs," Feroleto found. As such, the case can proceed.

It's not clear when – or even if – the case will go to trial. But it's headed that way unless the platforms successfully appeal or settle. The result could set a precedent making it harder for social media platforms to use Section 230 as a get-out-of-jail-free card – something US officials have long sought.

We asked all of the affected platforms for comment. Reddit and YouTube were the only two to respond; both indicated they intend to appeal.

YouTube told us it has the deepest sympathies for the victims and their families, and said it has invested for years in technology and policies to identify and remove hateful content.

"While we disagree with today's decision and will be appealing, we will continue to work with law enforcement, other platforms, and civil society to share intelligence and best practices," a YouTube spokesperson told The Register.

Reddit declared that hate and violence have no place on its platform, and pointed us to its content policy, which prohibits hateful content based on identity or vulnerability, as well as messages that glorify, encourage, or incite violence.

"We are constantly evaluating ways to improve our detection and removal of this content, including through enhanced image-hashing systems, and we will continue to review the communities on our platform to ensure they are upholding our rules," Reddit told us. ®
