SCOTUS rules Google and Twitter didn't contribute to terrorist attacks

And leaves Section 230 for another day

The US Supreme Court has ruled that Google and Twitter did not break the nation's Anti-Terrorism Act by publishing and recommending content that supported the Islamic State terrorist organization, also known as ISIS.

In a Thursday decision, the justices unanimously sided with Big Tech in the cases Twitter, Inc. v. Taamneh and Gonzalez et al v. Google.

The cases were brought by the families of Nohemi Gonzalez and Nawras Alassaf, who died in ISIS terrorist attacks in Paris and Istanbul in 2015 and 2017, respectively. The families sued Twitter, Google, and Facebook under a provision of the Anti-Terrorism Act that allows those who have been injured by acts of terror to seek civil damages. The plaintiffs accused the tech giants of contributing to the deaths of their family members by recommending terrorist propaganda and recruitment material to users of their platforms.

While the suits concerned the Anti-Terrorism Act, Google's defense relied in part on Section 230 of the Communications Decency Act, which more or less protects internet companies from liability for content generated by their users. There are some caveats.

Crucially, Section 230 states: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." The idea here is that websites and apps that allow people to communicate with each other shouldn't normally be held liable for the content of that communication.

In the opinion for Twitter, Inc. v. Taamneh, Justice Clarence Thomas declared that the plaintiffs had failed to show a direct link between pro-ISIS content on the social media platform and the 2017 attack at the Reina nightclub in Istanbul.

"As alleged by plaintiffs, defendants designed virtual platforms and knowingly failed to do 'enough' to remove ISIS-affiliated users and ISIS-related content – out of hundreds of millions of users worldwide and an immense ocean of content – from their platforms," he wrote [PDF].

"Yet, plaintiffs have failed to allege that defendants intentionally provided any substantial aid to the Reina attack or otherwise consciously participated in the Reina attack – much less that defendants so pervasively and systemically assisted ISIS as to render them liable for every ISIS attack. Plaintiffs accordingly have failed to state a claim under [the Anti-Terrorism Act]."

The court came to a similar conclusion in Gonzalez et al v. Google.

"Since we hold that the complaint in that case fails to state a claim for aiding and abetting under [the Anti-Terrorism Act], it appears to follow that the complaint here likewise fails to state such a claim," they concluded [PDF].

"Countless companies, scholars, content creators and civil society organizations who joined with us in this case will be reassured by this result," Halimah DeLaine Prado, general counsel at Google, told The Register in a statement.

"We'll continue our work to safeguard free expression online, combat harmful content, and support businesses and creators who benefit from the internet."

Twitter did not respond to a request for comment.

Although Big Tech has successfully dodged these liability claims, is the ruling a win for Section 230 and its safeguards for internet platforms?

Not exactly. The Supreme Court's decisions show only that the justices did not find a direct link between the ISIS attacks and Google and Twitter's recommendation of pro-terrorist videos or posts – not that the companies were protected by Section 230.

"We think it sufficient to acknowledge that much (if not all) of plaintiffs' complaint seems to fail under either our decision in Twitter or the Ninth Circuit's unchallenged holdings below. We therefore decline to address the application of [Section 230] to a complaint that appears to state little, if any, plausible claim for relief," the court wrote.

The decision therefore offers little to advance the debate about whether Section 230 should be modified – an idea advanced by both sides of the political aisle on the grounds that the internet is full of fake news, child abuse material, and other contentious content that internet companies should perhaps do more to filter or combat. Calls for reform have, however, collided with concerns about harming free speech and the practicalities of running massive communication systems. ®
