Loadsamoney: UK mulls fining Facebook, Twitter, Google for not washing away filth, terror vids

MPs suggest 'system' of punishments in web crackdown

Updated An influential panel of UK MPs has proposed fining the likes of Facebook, Google and Twitter if they fail to remove illegal content within a certain timeframe.

Parliament's Home Affairs Committee published a report on Monday that was highly critical of the US tech giants for failing to take down content such as terrorist or extremist videos, neo-Nazi propaganda, and material that promoted child sexual abuse or incited racial hatred.

It concluded that the American goliaths "must be held accountable," and reached the same conclusion as the German government did last month: the only thing that the companies will pay attention to is being fined.

"Given their immense size, resources and global reach, the Committee considers it 'completely irresponsible' that social media companies are failing to tackle illegal and dangerous content and to implement even their own community standards," an official summary of the select committee report notes.

The report found that even when companies like Facebook were warned about illegal content, they often took a long time to remove it, and even then similar videos remained instantly discoverable by searching on certain keywords or organizational names.

Aside from recommending a "system of fines," the report recommends that social media companies be obliged to "proactively search for and remove illegal material." If they refuse, the report proposes that the UK police do the job and effectively bill the companies for their time.

The select committee also wants regular transparency reports that highlight what each company is doing, how many staff it has, what its policies and approaches are, and metrics around the removal of such illegal content.

Like football but different

The report gives some quick examples of the sort of material that remained online despite complaints, and uses the analogy of a football team to explain why Facebook et al should pay for their own policing.

"Football teams are obliged to pay for policing in their stadiums and immediate surrounding areas on match days," it argues. "Government should now consult on adopting similar principles online – for example, requiring social media companies to contribute to the Metropolitan Police's CTIRU [counter-terrorism internet referral unit] for the costs of enforcement activities which should rightfully be carried out by the companies themselves."

The report does not mince its words, arguing that the tech companies are "shamefully far" from where they should be and that their current approach is "completely irresponsible and indefensible."

"If social media companies are capable of using technology immediately to remove material that breaches copyright, they should be capable of using similar content to stop extremists re-posting or sharing illegal material under a different name," the report argues.

"We believe that the Government should now assess whether the continued publication of illegal material and the failure to take reasonable steps to identify or remove it is in breach of the law, and how the law and enforcement mechanisms should be strengthened in this area."

It concluded: "No longer can we afford to turn a blind eye."

The report comes just weeks after an attack in the heart of London left five people dead and many more injured. That incident led to numerous criticisms of social media companies' role in hosting extremist material, and to a quick meeting between tech companies and the prime minister that was criticized for achieving nothing.

It's not immediately clear what will happen with the report, however, thanks to the rapidly approaching general election, unexpectedly called by prime minister Theresa May last week.

The Home Affairs Committee made it plain that its report had been published earlier than planned due to the election. "The announcement of the General Election curtailed the Committee's consideration of the full range of issues in this inquiry, and the recommendations have had to be limited to dealing with online hate, arguably the most pressing issue which needs to be addressed now," it notes.

"However, it is hoped that the successor committee in the next Parliament will return to this highly significant topic and will draw on the wide-ranging and valuable evidence gathered in this inquiry to inform broader recommendations across the spectrum of challenges which tackling hate crime presents."

It is entirely possible that a different party will be in government after the June election, and that the committee itself will have new members and a new chair.

Meanwhile in Europe

The approach is very similar to proposals made last month in other parts of Europe.

The German government put forward legislation that would see Facebook, Twitter and others fined up to €50m ($53m) for failing to remove slanderous fake news and hate speech within 24 hours.

Justice minister Heiko Maas said the bill was designed to "combat hate crime and criminal offenses on social networks more effectively" and would cover "defamation, slander, public incitement to crime, and threats."

And just a few days later, the European Commission also threatened fines – this time if the US-based companies did not change their terms and conditions to allow them to be sued in their users' countries rather than the state of California.

For the companies themselves, the issue of removing content is seen through a very different lens: that of the United States' constitutional First Amendment. What is often illegal in Europe – thanks to its history of violent extremism – is protected under free speech laws in the United States.

But under pressure from other governments, especially given the recent rise in violent acts carried out in the name of extremist organizations, Facebook, Google, Twitter and others have made big efforts to handle the flood of information produced every day by their millions of users.

However, those efforts have been largely focused on figuring out ways to hide content rather than delete it. That approach is not good enough for European lawmakers, who face the rise of far-right political organizations and whose nations have suffered a slew of terrorist attacks in recent years.

We have asked Facebook, Google and Twitter for their responses to the proposal. We will update this story if they respond. ®

Updated to add

Twitter told us that its rules already ban hateful conduct and abuse and that it has "significantly expanded" its efforts to take such content down, as well as locate and shut down users that post such material or harass others.

The company also highlighted its recent transparency report, where it noted that 74 per cent of over 630,000 "terrorist accounts" it identified had been removed by Twitter's own technology, and just two per cent were reported by police. And it drew our attention to comments by Essex's chief constable, who noted that the Computer Misuse Act is now 26 years old – and not suitable for the internet age.

Twitter said the issue was an "ongoing process" but perhaps pointedly noted that it was "listening to the direct feedback of our users."
