X fails to remove hate speech over Israel-Gaza conflict
98% of posts reported stay up, and Musk's response is to sue the messenger
Elon Musk's X, formerly Twitter, continues to have problems policing hate speech, the Center for Countering Digital Hate (CCDH) has reported, with 98 percent of inflammatory posts about the Israel-Gaza war remaining up a week after being flagged.
In a report published Tuesday, the British non-profit said it identified a sample of 200 posts that breached X's rules "for promoting antisemitism, Islamophobia, anti-Palestinian hate and other hateful rhetoric" posted in the wake of Hamas' attack on Israel on October 7. It reported them using X's moderation tools on October 31, and a week later 196 of the posts were still up. Of the 101 accounts that made the posts, only one was suspended and two had been locked, the group said.
"Hate actors have leapt at the chance to hijack social media platforms to broadcast their bigotry and mobilize real-world violence against Jews and Muslims" in the wake of the Hamas terror attack, said CCDH founder Imran Ahmed.
"This is the inevitable result when you slash safety and moderation staff, put the Bat Signal up to welcome back previously banned hate actors, and offer increased visibility to anyone willing to pay $8 a month," he added.
Of the 101 accounts identified by CCDH in the study, 43 were verified accounts that "benefit from algorithmic boosts to the visibility of their posts," CCDH said. The posts that remained up, it added, had accrued more than 24 million views on X.
The report follows previous work from CCDH on hate speech on the platform formerly known as Twitter, the most recent of which was published in September. That report similarly found that, a week after reporting 300 posts from 100 accounts that contained hate speech, 86 percent of the posts remained up and only 10 accounts had been banned or locked.
X sued the CCDH in August, claiming that its "misleading" reports had chased advertisers away from the company. X also claimed the non-profit had violated its terms of service by scraping data from the platform.
Musk later threatened to sue the Anti-Defamation League for similar reasons, claiming the group was responsible for destroying $4 billion worth of Xitter's valuation, or "no less than 10 percent" of the company's value. Based on internal documents from late October, X now values itself at just $19 billion; Musk paid $44 billion for the biz in 2022.
Since Musk's purchase of the platform, hate speech has run rampant across X, prompting an advertiser exodus that, despite multiple attempts to woo brands back, the company has yet to reverse.
European Union officials last month threatened X with penalties under the Digital Services Act for what they said has been the widespread dissemination of disinformation surrounding the Israel-Hamas conflict.
X actually responds
After more than a year under Musk, we had all but given up on getting a response from X when we reached out for comment, but it broke its automated-message streak today. The company told us it was aware of the CCDH's latest report, and disputed its claims in an online counterblast along with additional comments from an unidentified company executive.
According to X's own data, it has "actioned" more than 325,000 pieces of content that violated its terms of service "in response to the Israel-Hamas conflict." We note the use of the word "actioned," which could mean any number of things, up to and including removing posts.
X also claimed it removed more than 3,000 accounts, and has "expanded our proactive measures to automatically remediate against anti-Semitic content."
"As you can read … X has taken action on hundreds of thousands of posts in the first month following the terrorist attack on Israel," the X executive told us.
X only takes action on accounts "for serious violations of our rules," we're told. Instead of banning accounts outright, "the majority of actions that X takes are on individual posts, for example by restricting the reach of a post," it's claimed.
"By choosing to only measure account suspensions, the CCDH will not represent our work accurately," the X-ecutive told us through a spokesperson.
CCDH told us that it didn't only look at account suspensions, and instead "looked at the proportion of the sample of 200 hateful tweets that were still being hosted seven days after we reported them."
X also told us it urges the CCDH to "engage with X first," so it could "provide context or ensure that the proper actions have been taken." The CCDH told us it doesn't typically share its findings with the tech platforms it monitors prior to publication. ®