Campaigners warn of an 'algorithm-driven censorship' future if UK Online Safety Bill gets through Parliament
MPs and activists join forces to fight 'dangerous' legal threat
MPs and anti-censorship campaigners have warned that the British government's Online Safety Bill "mistakes the medium for the message" and will result in algorithms censoring anyone who posts something on social media that could get a Silicon Valley company into trouble.
The newly formed group, under the slogan "legal to type, legal to say", is made up of David Davis MP, campaign group Index on Censorship, media law barrister Gavin Millar QC, and others.
They warn that the Online Safety Bill's "duty of care" approach to tech platform regulation will crush the rights of Britons to speak freely and safely online.
Ruth Smeeth, the former Labour MP who is now chief executive of Index on Censorship, said in a statement: "We don't need a sweeping law that will enable private technology companies across the pond to decide what we can see in the UK; that's what our government is for."
The proposed law, known as the Online Harms Bill until a last-minute renaming before its introduction to Parliament, aims to make social networks, search engines, and other online services with UK users liable for what their users share online.
You can view the draft bill here [PDF].
Supporters of the legislation, on the other hand, have claimed it is softer on tech giants than internet policy experts recommend. Among other things, it asks tech companies to mark their own work by producing transparency reports, something the tech giants themselves have already offered to do in response to competition policy efforts.
Gavin Millar QC, a noted media law barrister, warned:
The bill mandates that [Silicon Valley] design and implement what are called safety policies. This will be done primarily by tech solutions, which is algorithms rather than human beings making judgments about the content. And as others will explain, the definitions of the content they have to manage out of existence in this way are vague, and set at a very low threshold.
He continued: "It's fundamental, it's important to remember that what's at stake here is somebody exercising a fundamental human right."
Platforms will face unprecedented fines of up to 10 per cent of global turnover, which for Facebook alone would run to billions of pounds. Similar efforts by the EU in its draft Digital Services Act proposed fines of up to 6 per cent of the global top line.
The power to dish out fines will rest with comms regulator Ofcom. Ofcom's previous experience of internet regulation is minimal, its current remit being to pass judgment on all non-British-state-owned telly channels and radio stations broadcasting within the UK.
Home Secretary Priti Patel said in May when the Online Safety Bill was first unveiled: "It's time for tech companies to be held to account and to protect the British people from harm. If they fail to do so, they will face penalties."
Index on Censorship's Smeeth commented at a press conference this morning:
"Our government is outsourcing decisions on free speech to Silicon Valley while at the same time not giving the police the money they need to prosecute people who make illegal content."
It won't apply to journalists so they won't oppose it… right?
Although the government has granted exemptions for journalists and lawyers (and politicians, naturally), the attempts to carve out space for critics just make the situation worse, Smeeth said.
"The safeguards in the bill… will create two tiers of free speech, free speech for journalists and politicians, and censorship for ordinary citizens," she said.
Echoing this was former Brexit secretary and veteran free speech campaigner David Davis MP, who said: "Silicon Valley providers are being asked to adjudicate and censor legal but harmful content. Now, because of the vagueness of the criteria [in the bill] and the size of the fine... they're going to lean heavily on the side of caution, and anything that can be characterised as misinformation will be censored."
Millar added: "Under the duty of care provisions in chapter two of the bill, the process envisaged by it requires platforms to, and I quote, manage and mitigate the risk of harm from illegal content, content harmful to children, and content harmful to adults. That is harm as defined in the bill."
Duty of care – great for circular saws, not for speech control
He continued: "And as David said, all this chills free speech. What does that mean? Well, at the end of the day, people providing content on the internet begin to self-censor, to avoid the possibility that their content will be managed offline, and they'll get into all these sorts of problems. And that's a terrible risk from this bill."
This is a reality today. Most people who watch YouTube videos will be aware of the phrases "demonetisation" and "content strikes". YouTube creators whose videos Google takes against can have those videos hidden from automatic recommendations, or the ad revenue normally shared with them diverted back into Google's pockets, depriving them of an income stream.
The result is some topics becoming taboo, not because of societal or explicit legal pressure but because a corporation based in a different legal jurisdiction (or, rather, its outsourced moderators), applying arbitrary standards, decides to make it so.
Graham Smith, a lawyer who tweets and blogs as Cyberleagle and has written extensively about the harmful parts of the Online Safety Bill, said this morning that the duty-of-care concept adopted by the government simply wouldn't work as a means of controlling online speech:
If you try to legislate on the basis that a tweet is the same sort of thing as a circular saw or a loose floorboard or a broken pavement, you're going to get into trouble. Why is that? Because speech is not a tripping hazard. Physical injury, which is a subject of traditional health and safety, and the duty of safety related to 'duty of care' is objective – whereas speech is subjectively perceived.
One shouldn't apply concepts designed for objective situations to scenarios where two people can reasonably disagree about whether to take offence at an online post or video, Smith argued.
The Online Safety Bill is due to make its way through Parliament during the coming months. ®