We take a look at proposed Big Tech regulations in the UK: Heavy on possible fines, light on enforcement
Online Harms draft gets most things right, still gives Facebook and friends too much leeway
Analysis Tech giants face massive fines of up to 10 per cent of their annual revenue if they fail to follow new rules aimed at reducing the amount of harmful content on their platforms, the UK government has decided.
The Online Harms white paper, published Tuesday, sets out the government’s approach following more than 18 months of consultation and will form the basis of new legislation to be introduced in the new year.
In it, the government gives the job of developing new standards for taking down or restricting illegal and harmful content, as well as investigating and enforcing those rules, to existing regulator Ofcom, which is likely to fund the additional work by fining companies that fail to meet the new rules.
The proposed 10 per cent fine (or £18m, whichever is higher) is significantly larger than the 6 per cent that the European Union proposed in its equivalent Digital Services Act, also launched today, and has become the headline figure for coverage of the paper. But behind that, the paper is softer on the tech giants than many internet policy experts had recommended.
The most striking example is the promotion of “transparency reports” - something that gets its own report. Such reports are a policy solution consistently promoted by the tech companies themselves but have had little real impact on the working practices and cultures at those companies. Stats do not always drive actions, especially if there is no independent verification of how accurate they are.
Not great results
When the European Commission reviewed the self-assessment reports provided by Facebook, Google, Microsoft, Mozilla and Twitter under its Code of Practice on Misinformation, in October 2019, it concluded that they “provide little insight on the actual impact of the self-regulatory measures taken over the past year as well as mechanisms for independent scrutiny.”
Likewise, the takedown reports issued by the same social media giants in response to the “Santa Clara Principles” created in 2018 - another effort to get the monster companies to open up - have resulted in useful data, but little or no actual insight into how they operate, and little pressure to change their business drivers and corporate cultures. The problems have persisted.
As a result of these failures to effect actual change, internet policy experts have been arguing for a stricter regime in which independent evaluators are given the right to check on the figures as well as form their own assessments and recommend new metrics or approaches.
The Tony Blair Institute for Global Change has recommended bringing in auditors similar to long-standing practice in the financial industry. The Internet Commission has also argued for independent evaluation and developed a series of qualitative and quantitative indicators for content moderation.
The UK government has stopped well short of this approach, raising the likelihood that the tech giants will continue to string out real reforms for a few more years.
That said, the government does foresee giving Ofcom the necessary powers to insert itself into that process if self-reporting fails to deliver real results. Ofcom will be able to decide its own metrics and measures, and it will have the power to enter offices and insist on being given information.
Just as importantly - and something experts are pleased to see - the UK government foresees Ofcom hiring outside experts to do some of its work, especially on the technical side. This is critical if it is to avoid a significant problem: overseers who do not really understand how the tech giants’ systems and underlying technology actually work. In the past, that gap has enabled tech giants to run rings around those trying to induce change.
The Online Harms paper also gives tech giants the benefit of the doubt when it comes to making details of their systems available to researchers and academics. The paper foresees the tech companies being obliged to hand over access to such data, albeit confidentially, but seemingly ignores the fact that Facebook has been repeatedly criticized by those very academics for purposefully making it hard for them to gain useful information from the system it set up.
Also unfortunate is the failure to grasp that ultimately nothing will change until independent evaluators are allowed to see and understand how the tech giants’ algorithms actually work. The tech giants have been extremely protective of their “secret sauce”, citing its huge commercial worth. That is true, but systems have been in place for decades that enable evaluators to learn about commercially valuable information without disclosing it.
Until it is possible for regulators to see how the sausage is made, it will be impossible to know what users and society are eating on a daily basis. There are only two mentions of the word “algorithm” in the Online Harms paper, neither addressing access. The accompanying transparency reports document notes their importance but doesn’t push for access.
It notes: “Transparency around the use of algorithms is also an important part of the equation. It is important that the regulator and users understand the impact that use of algorithms may have, but the information that will be valuable is likely to differ between these audiences… Transparency reports may not be the appropriate vehicle for certain information about the use of algorithms. There is value in the regulator having the power to conduct audits in relation to companies’ internal systems and processes.”
The good stuff
There are, however, many good aspects to the paper. For one, the UK government has stepped back from introducing criminal charges for tech execs, noting that it will introduce any such measures in subsequent legislation if deemed necessary.
The paper also includes several key phrases such as “risk-based” and “duty of care” which stem from work done by the Carnegie Trust. The Trust has argued that rather than trying to regulate content directly and force companies to assess and be assessed on individual videos, pictures or posts - something that is borderline impossible given the vast amount of content published on these platforms every day - the key is to regulate the systems behind the content that make decisions about what to do with each piece of content.
That form of regulation would be built around a statutory duty of care, enforced by a regulator that balances rights - including, significantly, human rights and freedom of expression. The regulator would assess how companies address the risk of harmful content being published, rather than coming down on companies if one piece of harmful content slips through.
It’s an approach the UK government appears to have fully embraced. From the White Paper: “As an independent country, the UK has the opportunity to set the global standard for a risk-based, proportionate regulatory framework that protects citizens online and upholds their right to freedom of expression.”
The other big pluses from the approach outlined, compared with what exists currently, are: a new code of conduct specifically focused on the tech companies’ responsibilities toward children; the inclusion of misinformation among the sorts of content that the regulator will review; and the requirement for companies to create and publish terms and conditions that they can be fined for breaching.
But some consumer advocates remain annoyed that the government did not include online scams among the content that should be regulated.
Delaying the inevitable
In short, the government has tried to tread a fine line between ensuring there are real measures and powers that will make the tech giants look at their systems seriously, and giving them the benefit of the doubt that they will make real changes rather than pay the law lip service.
Ofcom will be given the powers to intercede and make things painful for the companies if they don’t address the longstanding issues over harmful content, but it will have to make that determination first and then move to act.
In that sense, the UK government has given the tech giants far more credit than they deserve. It is, sadly, inevitable that Facebook in particular will do everything in its power to avoid making real changes. It won’t be a matter of whether but when Ofcom will need to act against it.
This white paper punts that inevitable confrontation into the future. But it does at least ensure that the regulator can win. ®