Analysis Social media giants Twitter and Facebook remain on the receiving end of severe criticism from the US Congress and elsewhere, as investigations into Russia's interference in America's presidential election highlight the depth to which the tech giants' platforms continue to be abused.
On Wednesday, both companies said they had agreed to send representatives to a public grilling on November 1 by the Senate Intelligence Committee. Google has yet to respond. That confirmation follows closed-door hearings in which Facebook and Twitter were lambasted for allowing Russian agents to take out and promote huge advertising campaigns aimed at sowing division and spreading fake information in the lead-up to the presidential election.
Last month, Facebook admitted it had received $100,000 in ad spending from 470 accounts connected to Russia, describing the adverts as "amplifying divisive social and political messages." The ads even targeted the key battleground states of Michigan and Wisconsin.
Two weeks later, and following significant political pressure, it agreed to hand over the details of those ads to Congress, and CEO Mark Zuckerberg wrote a blog post fretting about the role Facebook has played in spreading Russian propaganda.
Facebook subsequently took out a full-page ad in the New York Times saying pretty much the same thing. But how does that square with the fact that Facebook actively pushed for exceptions to election rules that require transparency when people take out political ads?
Attention then turned to Twitter, with the milliblogging website admitting it too had taken hundreds of thousands of dollars for ads that supported Russian media messaging. Twitter bosses said three accounts associated with Kremlin-backed Russia Today spent $274,100 on US ads in 2016. The ads, it said, mostly promoted RT's tweets about its news stories.
For their part, both Twitter and Facebook have tried to argue that they were innocent, hapless parties caught in a sophisticated web spun by a foreign power. But that argument – which was already raising eyebrows – was beaten to a pulp when it became clear that for these online giants, ignorance is not a bug – it's a feature.
As we pointed out: revenue is better without responsibility. But that's not how the real world works, and time is rapidly running out for companies worth billions of dollars to claim wide-eyed innocence. Most recently, in the aftermath of the worst mass shooting in US history, when a lone gunman murdered more than 50 people at a Las Vegas music festival, people were horrified to discover that completely false information about the massacre was being heavily promoted on Google, Facebook and Twitter.
Google's "top stories" section of its website actually linked to a discussion thread on 4chan – yes, that 4chan – that, inaccurately, claimed to have identified the shooter. That led to a demented online search for the wrong person – something that was given a huge boost by the fact the message board thread appeared on Google's news aggregation service as a legit article.
Not my fault, guv
Google claimed it was just those pesky algorithms again promoting an anonymous discussion thread as a proper story. Its statement this week read:
Unfortunately, early this morning we were briefly surfacing an inaccurate 4chan website in our Search results for a small number of queries. Within hours, the 4chan story was algorithmically replaced by relevant results. This should not have appeared for any queries, and we'll continue to make algorithmic improvements to prevent this from happening in the future.
Increasingly, lawmakers and society are saying that this effort to escape responsibility simply isn't good enough.
It wasn't just Google search, either. Google-owned YouTube listed a number of dangerously wrong videos at the top of search results for "Las Vegas massacre" and related phrases, including some videos produced by whack-jobs claiming the whole thing was a "false flag operation", ie, carried out by one group pretending to be another. At least one of those videos has more than 1.1 million views.
Facebook was no better. Fake, inaccurate, and dangerous stories – including some that pinned responsibility on specific groups – were pushed into people's news feeds, gaining an aura of credibility in the process. When questioned over how its platform was being used to plant fake information, Facebook likewise disclaimed responsibility in a statement:
Our Global Security Operations Center spotted the post this morning and removed it. However, its removal was delayed by a few minutes, allowing it to be screen captured and circulated online. We are working to fix the issue that allowed this to happen in the first place and deeply regret the confusion this caused.
Some have already called BS on that response, noting that for many the post being referred to in this case was up for at least 30 minutes. Meanwhile on Twitter, some users posted knowingly fake missing-person reports. When one was tracked down and asked why they would do such a thing, the twit told a reporter: "For the retweets :)"
And this is the larger problem: Facebook and friends claim they have nothing to do with the spreading of fake news ("it was the algorithms, not us"), then defend themselves by pointing out how fast the posts were taken down, implying the media is overhyping the impact. The reality is that their platforms actively encourage the creation and spread of such fake information by failing to do an adequate job of stopping such posts from appearing in the first place.
Facebook, Google and Twitter are attempting the same fake-modesty approach used by newspaper publishers and other media outlets for decades. Despite consciously and deliberately trying to get readers to vote in a specific way in elections, plenty of printed newspapers claimed they had no real impact when push came to shove. Editors would boast about their influence on front pages, and then, in the face of looming press regulation or criticism, quietly argue they didn't have any actual power.
Except that enormous power was held largely in check through effective competition and various laws, especially in the UK. Neither of those exists in any real capacity for these online giants: they have enormous, almost monopolistic control of billions of digital eyeballs, and they rely on safe harbor provisions to disown any responsibility for what appears on their platforms. We didn't write it, guv – our users did.
That's not good enough when the societal impact is so significant. And lawmakers know it. As just one tiny example, here is an article from Veterans Today (not the most reputable publication but even so) that digs into the likelihood of the Las Vegas shooting being a false flag operation. These articles wouldn't – and shouldn't – exist were it not for the spread of fake news through social media accounts and the promotion of that content on the platforms themselves.