Road to nowhere: UK plans for an 'AI assurance industry' but destination is unclear

Govt hopes for 'mature, world-class' sector, but doubts linger over regulation

The UK government's Centre for Data Ethics and Innovation (CDEI) has published a "roadmap" designed to create an AI assurance industry to support the introduction of automated analysis, decision-making, and processes.

The move is one of several government initiatives planned to help shape local development and use of AI – an industry that attracted £2.5bn investment in 2019 – but it raises as many questions as it answers.

Part of the Department for Digital, Culture, Media & Sport (DCMS), the CDEI said by "verifying that AI systems are effective, trustworthy and compliant, AI assurance services will drive a step-change in adoption, enabling the UK to realise the full potential of AI and develop a competitive edge."

Launching the move, DCMS minister Chris Philp said: "The roadmap sets out the steps needed to grow a mature, world-class AI assurance industry. AI assurance services will become a key part of the toolkit available to ensure effective, pro-innovation governance of AI."

How that governance will take shape is, as yet, a bit fuzzy while the industry waits on proposals for AI legislation in the forthcoming White Paper on governance and regulation.

Whatever laws the assurance industry is expected to help organisations avoid breaching, the idea is that third-party AI assurance providers will offer reliable information about the trustworthiness of AI systems, according to the launch document.

The "roadmap" - awful word, we know - calls for all players in the AI supply chain to "have clearer understanding of AI risks and demand assurance based on their corresponding accountabilities for these risks."

"AI assurance will be critical to realising the UK government's ambition to establish the most trusted and pro-innovation system for AI governance in the world, set out in the National AI Strategy," the document says.

Elsewhere in Whitehall, the Central Digital and Data Office has developed an algorithmic transparency standard for government departments and public-sector bodies. Drawn up with the CDEI, the standard will be piloted by several public-sector organisations and refined based on their feedback, the office said.

IT analyst group Forrester has released its own proposals to help businesses navigate something it calls "AI fairness", a broad concept intended to help organisations avoid the dire regulatory, reputational, and revenue impacts of getting AI wrong. "As fairness in AI is a relatively new concept, regulations explicitly dictating a specific fairness metric are lacking and best practices are just emerging," it said.
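
For the curious, here is what one such fairness metric looks like in practice. The sketch below computes the demographic parity difference, a commonly discussed measure we have picked purely for illustration (neither Forrester nor any regulator prescribes it), using made-up loan-approval predictions:

```python
# Minimal sketch: demographic parity difference, one commonly cited
# fairness metric. The data and the two-group split are illustrative
# assumptions, not anything mandated by Forrester or a regulator.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between two groups (0 and 1)."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Hypothetical loan-approval predictions (1 = approve) for two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5: a large gap
```

A result near zero means both groups are approved at similar rates; the 0.5 gap above is the kind of number an assurance provider might be asked to flag. Which metric to use, and what threshold counts as "fair", is exactly the sort of question the regulations have yet to answer.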

Martha Bennett, Forrester veep and principal analyst, said the problem in the UK's case was that efforts to develop an AI strategy and assurance industry were out of step with parallel reforms to data protection law, which governs the use of personal data in building machine-learning models and sets out individuals' rights in their dealings with AI.

Talking about the reforms in August, the UK's then Secretary of State for Digital, Oliver Dowden, promised "a bold new data regime" following the kingdom's departure from the EU, a regime that "unleashes data's power across the economy and society for the benefit of British citizens and British businesses," he trilled.

When launching the consultation on the reforms, the government said it was considering removing individuals' right to challenge decisions made about them by AIs, a move that attracted criticism.

Bennett said: "It's almost like they haven't joined the dots somehow. They're talking in this proposed UK Data Protection revision about amending the right not to be subject to a decision based solely on automated processing and I've even heard people say that loosening up on those particular requirements could give the UK a competitive advantage.

"But that to me is a dangerous path to take and to me is a real crunch point because it goes in the opposite direction of where everyone else is going in what we call the explainability of AI models. You should always be in a position to defend a decision. If an individual feels that the decision has been unfairly taken, they should be able to get an explanation and it is possible to make AI systems explainable because you know what the inputs are."

The UK's National Data Guardian (NDG), whose remit covers the use of health and care data, also warned against watering down individuals' rights to challenge decisions made about them by artificial intelligence.

"The NDG has significant concerns about proposed reductions to existing protections and the ability of professionals, patients, and the public to be actively informed about decisions that can have significant impacts for them," said Dr Nicola Byrne.

Other leading figures in AI ethics argue for a broader view still. Timnit Gebru, co-lead of Google's Ethical AI team before her controversial departure, said effective AI regulation should start with labour protections and antitrust measures to guard against overly powerful monopolies.

"I can tell that some people find that answer disappointing – perhaps because they expect me to mention regulations specific to the technology itself. While those are important, the number one thing that would safeguard us from unsafe uses of AI is curbing the power of the companies who develop it and increasing the power of those who speak up against the harms of AI and these companies' practices," she wrote in The Guardian.

Gebru – now founder and executive director of the Distributed AI Research Institute – also voiced concerns that the big tech companies leading the AI charge could exert undue influence on government policy.

"I noticed that the same big tech leaders who push out people like me are also the leaders who control big philanthropy and the government's agenda for the future of AI research. If I speak up and antagonize a potential funder, it is not only my job on the line, but the jobs of others at the institute," she pointed out.

It is notable in this context that the UK government's AI strategy was launched with a quote from DeepMind, the UK-based AI outfit owned by Google, the company that ousted Gebru.

Whatever the government means by creating a "roadmap for a mature, world-class AI assurance industry," questions remain about what exactly organisations and businesses are to assure against. And that's not very reassuring. ®
