In written testimony to Congress, published on Wednesday ahead of this week's hearing on internet disinformation, the CEOs of Facebook, Google, and Twitter detailed their efforts to remove misinformation from their platforms.
“People want to see accurate information on Facebook, and so do we,” Mark Zuckerberg wrote. “That’s why we have made fighting misinformation and providing people with authoritative information a priority for the company. We have recalibrated our products and built global partnerships to combat misinformation on a massive scale.”
Google’s Sundar Pichai explained the same: “This past year we’ve also focused on providing quality information during the pandemic. Since the outbreak of COVID-19, teams across Google have worked to provide quality information and resources to help keep people safe, and to provide public health officials, scientists and medical professionals with tools to combat the pandemic,” he said.
"We’ve launched more than 200 new products, features and initiatives—including the Exposure Notification API to assist contact tracing—and have pledged over $1bn to assist our users, customers and partners around the world.”
Likewise Twitter, which peppered this paragraph in its testimony with no fewer than 10 hyperlinks: “We have COVID-19 and vaccine misinformation policies, as well as a COVID information hub. Our civic integrity and platform manipulation policies are available on our Help Center, along with information on our bans on state-controlled media advertising and political advertising. As a follow-up to our preliminary post-election update, we are conducting a review of the 2020 US election, the findings of which we intend to share.”
Say it ain't so
So it may be surprising that, also on Wednesday, no fewer than 12 state Attorneys General sent a letter [PDF] to the same tech CEOs accusing them of doing far too little to combat COVID misinformation on their platforms.
“We write to express our concern about the use of your platforms to spread fraudulent information about coronavirus vaccines and to seek your cooperation in curtailing the dissemination of such information,” they state, adding: “Misinformation disseminated via your platforms has increased vaccine hesitancy, which will slow economic recovery and, more importantly, ultimately cause even more unnecessary deaths.”
It accuses a “small group of individuals” that “lack medical expertise and are often motivated by financial interests” of being behind a large misinformation campaign that has reached “more than 59 million followers.”
But what of all the wonderful programs that the tech giants have instituted to prevent this very thing from happening? The state AGs recognize that “the updated community guidelines you have established to prevent the spread of vaccine misinformation appear to be a step in the right direction.”
But? “It is apparent that Facebook has not taken sufficient action to identify violations and enforce these guidelines by removing and labeling misinformation and banning repeat offenders. As a result, anti-vaccine misinformation continues to spread on your platforms, in violation of your community standards.”
It then gives a rundown of all the things that the online platforms aren’t doing, including failing to remove from their services the “prominent anti-vaxxers” who, according to researchers, account for 65 per cent of the anti-vaccine content out there.
The AGs accuse Facebook of having “failed to consistently apply misinformation labels and popups on Facebook pages and groups” and even name one anti-vaxxer, Larry Cook, who apparently runs dozens of different Facebook groups in an effort to spread his misinformation and, presumably, bypass Facebook’s controls.
There’s more buckshot – and Facebook gets most of it: “Facebook has allowed anti-vaxxers to skirt its policy of removing misinformation that health experts have debunked, by failing to prevent them from using video and streaming tools like Facebook Live.”
So which is true? Are the tech giants doing a lot and the AGs are overreacting, or are the online platforms simply listing lots of things to look impressive but which aren’t actually effective in the real world?
The truth is likely somewhere in the middle. The problem is that there is no good way to know for sure. Those looking for misinformation find it and report it, and in return the tech giants take it down (while noting that they have taken down much more than is reported). And then people look again, and there’s more of it. Rinse and repeat.
For over a decade, the tech giants have argued that this isn’t even a game of whack-a-mole because that game implies that there are a limited number of mole holes whereas in reality the holes stretch as far as the eye can see. This same argument has been repeated yet again in the Congressional testimony this week.
Facebook provides a long list of large numbers: directing over two billion people to its Covid-19 Information Center; pointing over 140 million people to its Voting Information Center; removing over 12 million pieces of false content; and tripling the size of its safety and security teams since 2016 to over 35,000 people. And on and on.
Google says the same: its products helped two million US businesses, publishers, and others generate $426bn in economic activity; it plans to invest over $7bn in data centers and offices across 19 states, and create at least 10,000 full-time Google jobs in the US; more than 500 hours of video are uploaded to YouTube every minute, and approximately 15 per cent of Google searches each day are new; it added more than 125,000 voting locations in Google Maps; and across its products, these features were seen nearly 500 million times. And on and on.
But the truth is very much simpler: online platforms are as big as they are, and as inundated as they are, because they very specifically make it as easy as possible for people to sign up and post content. It makes them what they are. And they are companies that make billions of dollars in profits thanks to the vast amount of information posted, for free, to their platforms.
Anyone anywhere in the world can start posting misinformation onto YouTube, Facebook or Twitter within minutes if they have an internet connection. And they can keep posting until someone spots one piece of content and complains about it and the tech giant runs the complaint through its system and then removes it.
And then they can post another piece of misinformation. In fact, they can keep posting while the first piece is being reviewed. And, by the time the platform finally decides it’s had enough and shuts down the account, the same person has already created another five, ten, 50, or 100 accounts to do the same.
Until the issue of how to limit, track and control the misinformation from being posted in the first place is dealt with head-on, the rest of it is just words and numbers. It’s not at all clear that Congress has figured that out yet. ®