Online harms don’t need dangerous legislation, they need a spot of naval action
It worked on Jolly Roger, it can work on ProudWhiteGuy66373
Opinion Three things on the morning news reliably ruin breakfast for socially aware technogeeks.
- A government deliberately mistaking technology for magic, thereby shifting responsibility.
- Politicians using a national tragedy to push a flawed agenda past scrutiny.
- Organisations using the above to turn a buck.
Last week, many a bowl of cereal turned to ashes in the mouths of the savvy. Following the horrific murder of a British MP by a known religiously radicalised assailant, the UK government promptly identified the problem as the coarseness of national discourse – the fault not of any failure of political leadership, but entirely of online abuse.
Thus, the controversial Online Harms Bill will be rushed into law, quite possibly strengthened by new classes of offence and bans on anonymity. The cherry on top came from tech giants like Amazon, promptly popping up with API-based content auto-moderation services, thus forever blocking social media for the good folk of Scunthorpe and thereby doubling their property prices.
The Online Harms Bill has been floating around in some form or another for ages, hobbled by its inherent contradictions: it promises to protect both free speech and technological innovation, while the struggle to specify what sort of either is acceptable has kept it off the agenda. It has superficially laudable goals but demands impossible means – impossible, at least, without adopting Chinese levels of authoritarianism.
Special privileges for politicians...
So guess what? There are calls to make the "vilification of politicians" a special case. Any state that grants its power-brokers special protection from dissent is inherently corrupt: theocracies that brand criticism of the state religion "blasphemy", and regimes that make any criticism of the state a libel, use such laws not to protect public servants but to silence dissent.
Those who commit vile acts deserve vilification, but politicians have the power to entrench and extend vileness into our entire culture.
Misogyny, racism, homophobia and threats of violence online are unacceptable whoever the target, politicians included: but the ability to say that a politician is vile because their acts crush the lives of the powerless and desperate should never be silenced.
Likewise the calls to enforce ID for users. This breaks down on every level – infosec, effectiveness and sanity – and leads to logical conundrums. What do you do with users from outside your country? Do you ban their content from UK screens? Do you set up an international ID register? Do you fine people who get around the system – and if you can find them to do that, why not simply act against them when they create harm? Who keeps the ID database, and what happens when it's hacked, or a government uses it to harm groups it dislikes, or…
Well, these are all old and unanswerable arguments, but let it never be said that any government surrenders to facts and logic with technology.
There is one powerful tool against technologically disseminated harms that has been proven to work, even against organisations outside the control of the national government. It was used to great effect to close down unauthorised content in the UK some 50 years ago because of a perceived threat to the status quo (one ironically taken advantage of by The Status Quo). It was the Marine, &c., Broadcasting (Offences) Act 1967.
This shut down most of the pirate radio scene overnight – until then a noisy, popular and unregulated presence on the dial, by dint of putting transmitters on boats moored outside UK territorial waters. The Act made it an offence for any UK citizen to provide support to pirate ships, shutting off the tendering of supplies and, crucially, making it impossible for the pirates to sell airtime to UK advertisers. Money, beer and diesel stopped flowing, and the airwaves went a lot quieter.
The Online Harms Bill hasn't got much to say about online advertising. It notes that it's complicated, and that a Home Office working group has talked to folks "comprising representatives from advertising trade bodies, agencies, brands, law enforcement and the Internet Watch Foundation" about advertising, terrorism and child abuse content.
You'll note at once that no online rights group, nor any other body representing civil society or the legal profession, was involved – but be sure the advertisers, agencies and brands were happy.
However, instead of demanding complicated technical approaches to somehow automate the management and judging of human behaviour, imagine what would happen if an Online Harms (Offences) Act made it illegal to help run or fund the UK operations of any organisation found propagating harmful content. UK servers and CDN nodes could not serve or deliver its material, and if Sainsbury's advertising were found on a site causing harm, Sainsbury's would be liable.
This would be technologically agnostic. The regulator could spend its time identifying harms, not mandating or auditing solutions; organisations that provided proper, engaged moderation of content and communities could support anonymous users all day long; and it would do a bang-up job on those newspapers and TV stations that push harmful agendas under the flag of freedom of speech. Sure, keep on saying it – you just can't make money at it.
It's this approach, shifting business models towards responsibility rather than constructing edifices of state control, that works best. Pirate radio – and The Status Quo – continued happily after their content was brought within the BBC and, shortly afterwards, the new independent radio stations, but the dark side of pirate radio, the extremist religious content, faded away to the nerdy wilderness of shortwave.
You won't find this successful appliance of regulation to content mentioned in the debate, such as it is, on regulating online content for harm. It has its own flaws and contradictions, of course, and who knows what a workable system would look like in practice. But its complete absence tells us all we need to know about where the power still lies online, and how far we've got to go to stop our crunchy nuts tasting of morning ash. ®