Internet giants removing 70 per cent of reported hate speech, crows European Commission

But we might still drum up some new regs, so keep it up

Tech firms are removing more hate speech faster than before – so now EU lawmakers want them to improve their feedback to users.

According to the European Commission's latest review of the big four internet firms' action against illegal content online, the removal rate is, on average, 70 per cent of content reported.

That's an increase from the last assessment, carried out in May 2017, where the removal rate was 59 per cent, and a big boost from the first such report, in 2016, when it was 28 per cent.

The Commission praised the progress, but was quick to point out Facebook et al's flaws, saying that feedback to users is "still lacking for nearly a third of notifications on average" and must be improved.

The review covers the companies that have signed up to the European Union's Code of Conduct – a voluntary set of rules with the aim of countering the spread of illegal hate speech online.

Facebook, YouTube, Twitter and Microsoft have been members since May 2016; the code now also covers Instagram, and Google+ joined today.

Signing up to the code is another outward indication that the firms are taking online hate speech seriously – businesses are falling over themselves to be seen to be doing something, pushed on by the near-constant threat of extra legislation.

But it isn't binding; rather, it is a pledge that the firms will assess whether reported hate speech breaks community guidelines, or national and EU laws.

There is also an onus on the firms to deal with it promptly, and speed has increasingly been wielded as a threat against big biz – Germany wrote a 24-hour timeframe into its recent legislation, while the British Prime Minister has called for removal within two hours.

The EU review found that all of the companies "fully meet the target of reviewing the majority of notifications within 24 hours", with an average of 81 per cent being reviewed in that time, up from 51 per cent in 2017.

Although the Commission argues that the code is simply to encourage firms to take quick action against content that is already illegal, the increased pressure on firms to act fast has led to concerns about censorship.

For instance, Graham Smith, partner at Bird & Bird, told an event in London this week that content might end up "presumed to be guilty because it's accused".

And Karim Palant, UK public policy manager for Facebook, suggested that increased pressure from legislators might encourage companies to be overly cautious.

Describing the German model as "clumsy and ineffective", he said that the "net effect" of threatening fines – with no equivalent pressure to protect legal speech – would be "huge erring on the side of caution".

However, Adam Kinsley, director of policy at Sky, said that codes of conduct "need to be underpinned by legislation for it to have teeth", along with an independent regulatory system to oversee compliance.

"I don't think platforms should be afraid of this oversight, all other businesses are subject to it," he added.

Certainly, the Commission has promised it will continue monitoring the tech firms and once again dangled the threat of extra legislation – or in EU-ese, "additional measures" – if "efforts are not pursued or slow down". ®
