UK's AI fairy tale sets out on its yellow-brick roadmap

But what Faculty lies behind the plans for adoption and economic expansion?

The UK's AI Council could not have picked a worse week to launch its roadmap. As the world's media was understandably obsessing over the US panto-cum-insurrection season, who would highlight its attempt to put this island nation, newly unshackled from the EU, on a path to a 10 per cent GDP boost from AI by 2030?

Observers might have hoisted a few red flags when they looked at who is behind the body charged with providing independent input to the UK's AI strategy, which is expected to come from the Office for Artificial Intelligence (a joint unit between the Department for Business, Energy and Industrial Strategy and the Department for Digital, Culture, Media and Sport).

The AI company Faculty, something of a bête noire with the leftish media, has stuck its fingers in the pie of the AI Council, which has produced the AI Roadmap [PDF].

The company was a supplier to the Vote Leave campaign during the 2016 referendum deciding the UK's departure from the EU. As such, it became embroiled in accusations over the use of Facebook data and the Cambridge Analytica scandal, although it said it never "worked formally or informally with Cambridge Analytica" and denied its work had ever involved the use of "private Facebook data or so-called 'micro-targeting'."

Any concerns about Faculty's involvement may be down to paranoid media types disgruntled at a democratic decision with which they did not agree.

Nonetheless, the AI Roadmap talks about the need to "ensure public trust through public scrutiny."

"The UK must lead in finding ways to enable public scrutiny of, and input to, automated decision-making and help ensure that the public can trust AI," it said.

Far from leading the way, it took a legal campaign to crowbar any openness from the government in detailing how the NHS works with Amazon, Microsoft, Google, Faculty, and Palantir, the controversial US AI firm awarded an NHS data contract without inquiry.

Meanwhile, there has been a distinct lack of openness about why Dominic Cummings, former chief advisor to the UK Prime Minister, paid more than a quarter of a million pounds to Faculty over two years, according to reports.

But why not Faculty? It has proved successful in its field and is a leading UK AI firm. Then again, so have Onfido, the startup with more than $180m in funding focused on fraud prevention, and Graphcore, the Bristol-based AI chip designer that raised $222m over the Christmas period at a valuation of $2.5bn. Perhaps they are too busy being successful to be on the AI Council.

Anyway, back to the UK AI Roadmap. It said the "next few months and years will be crucial in determining where the UK places its desired level of ambition in AI."

A global arms race is afoot. Germany has committed €3.1bn to its AI strategy, France forked out €1.5bn up to 2022, and the US $1bn, the document explained.

"AI is a sovereign capability underpinning UK prosperity, security, resilience, diversity and sustainability," said Professor David Lane, founding director of the Edinburgh Centre for Robotics and an AI Council member.

To rise to the challenge, the roadmap sets out 16 fairly prosaic recommendations for a UK AI Strategy, which cover research, skills, data infrastructure and trust, and cross-sector adoption.

Since we're in the middle of a world-changing pandemic, the roadmap says the government should "build on the work of [NHS tech body] NHSX and others to lead the way in using AI to improve outcomes and create value in healthcare. The UK's comparative advantage will depend on smart strategies for data sharing, new partnership models with SMEs and skill-building."

"Value for whom?" some might ask. Indeed, some already have.

In 2017, the Information Commissioner's Office found London's Royal Free Hospital had failed to comply with data protection law in sharing patient data with DeepMind, a Google subsidiary incidentally also involved with the AI Council.

Canadian immunologist Sir John Bell, then chairman of the Office for Strategic Coordination of Health Research, described the case as the "canary in the coalmine". "I heard that story and thought 'Hang on a minute, who's going to profit from that?'" he said.

In any case, any practical step guiding AI in healthcare will have to wait for the National Health and Social Care Data Strategy, which isn't out yet.

Sam Smith, co-ordinator of campaign group medConfidential, said: "So-called 'smart' strategies for AI are some way off in health care, but that won't stop VCs and 'innovators' on the AI Council suggesting money moves away from things that work towards their pet projects that don't."

While private-sector companies like Google's DeepMind, MasterCard, and Faculty are represented on the AI Council, so are universities and the Centre for Data Ethics and Innovation, a government body.

But work on AI in medicine has already come from elsewhere. The Academy of Medical Royal Colleges, for example, has published its "Artificial Intelligence in Healthcare" report, yet neither the AI Roadmap nor the AI Council takes it into account.

It should because the report contains a warning. "The 'social licence' that AI enjoys so far is a precious commodity. Historic controversy over genetically modified food perhaps demonstrates the consequences when the trust between science and the wider public breaks down. It should also serve as a warning to AI developers that they should not take public acceptance and trust for granted," it said [PDF].

The point was underscored by a 2020 case involving chatbot provider Babylon Health, which branded Dr David Watkins, consultant medical oncologist at the Royal Marsden NHS Foundation Trust, a "troll" for pointing out shortcomings in its system. Babylon Health is said to have since improved its systems.

Meanwhile, UK medical professionals are sweating buckets behind PPE, literally risking their lives in the face of a pandemic. They are the people key to AI adoption in the field, but they are unlikely to care about the views of an AI Council on which they are under-represented.

The make-up of the council is set to raise suspicions that it will be used to justify whatever the government wants to do, while doing little to secure public or professional trust. The risk is that its roadmap goes nowhere. ®
