UK aims for 'openness and fairness' in its AI Strategy – unless we're talking about favoured contractors

Or the government itself, of course


It has been more than a month since the launch of the UK government's AI Strategy which, the authors said, "represents the start of a step-change for AI in the UK," and The Register, for one, has not forgotten.

While the strategy promises to "embed" supposed British values such as fairness and "openness" in the development and use of AI in the UK, events leading up to its launch, and in particular the behaviour of our government, tell a rather different story, one which could be worrying considering the likely impact of AI on society and the economy.

Some of the moves made by the UK over the first 18 months of the pandemic took place under the cover of emergency legislation, including deals inked by the government with a host of private tech firms in March 2020 to help deliver the NHS COVID-19 response.

One of these was the NHS COVID-19 data store, a project bringing together disparate medical and organisational data from across the national health service, with US spy-tech firm Palantir at the heart of it – although Google, Amazon, Microsoft and AI firm Faculty all hold contracts to work on the platform. Planners in the government's response team were said to have found it useful, but it also attracted controversy. Then, in December last year, the contract was extended for another two years, again without scrutiny.

In May this year, a broad-based campaign group wrote to (then) UK Health Secretary Matt Hancock (yes, "vast majority" of the UK are "onside" with GP data grab Hancock). The letter called for greater openness around the government's embrace of this gang of private technology vendors. The campaigners soon found they had to threaten court action to get the private-sector contracts published after those contracts were awarded without open competition.

Faculty and its CEO, Marc Warner, for one, had no trouble getting close to government circles – the very circles where, you might think, the UK's leaders should be more mindful about asking private-sector players to help them with the business of governance.

According to the testimony of former chief advisor to the Prime Minister, Dominic Cummings, in front of the Health Select Committee, the CEO was present during much of the decision-making in the crucial early stages of the pandemic, when Cummings was still advising the PM.

Reports from The Guardian – which Warner would later fail to deny – suggested he used his relationship with Cummings to influence Whitehall. "It felt like he was bragging about it," a senior source said, adding Warner would casually tell officials: "Don't worry, I'll text Dom," or "I'm talking to Dom later."

In the week leading up to publication of the policy document, Faculty said Warner wanted to talk to The Register to give his views on the government AI strategy, but he later proved unable to speak to us. He wasn't the only one. A host of other key private and public figures who'd normally cheerfully provide their take found themselves speechless.

In fairness to Faculty, it might be unable to speak to us because of the terms of its contract, or due to concerns over commercially sensitive information – we don't know. What we do know is that the firm was awarded a £2m Home Office contract, without competition, for Innovation Law Enforcement (I-LE).

The tender documents offered few details about how AI might be used in law enforcement and when asked, the Home Office simply said: "We are unable to share further information since it's commercially sensitive."

So much for openness.

We are hoping to get more information from the private firm, which one could argue is less duty-bound than our country's leadership to give it to us. We have sent a list of questions via the company's PR firm. Given Faculty's history, and reports about its government contracts, it seems fair to ask, for the sake of openness, how many public-sector contracts it has been awarded and how many of those were awarded after open competition. It did not respond to these questions specifically.

It did, however, provide a statement saying: "Faculty is a highly experienced AI specialist that has delivered work for over 350 clients across 23 sectors of the economy and in 13 countries. We have strong governance procedures in place and all of our contracts with the government are won through the proper processes, in line with procurement rules."

Openness in government contracting is not only a question of fairness. If the UK is serious about developing the nation's industry in AI – or indeed any high-tech industry – it needs fair and open competition for the billions of taxpayer pounds it spends in the tech market.

Google's AI subsidiary DeepMind was also closely involved in the UK's pandemic response.

DeepMind co-founder Mustafa Suleyman, now a veep for AI policy at Google, was reportedly approached by NHSX to help work with patient data, including discussing whether Google's cloud products were suitable for its data store project. In his role as chief advisor to the prime minister, Dominic Cummings brought Demis Hassabis, CEO and co-founder of DeepMind, into the heart of government decision-making, according to his select committee testimony [PDF].

Public procurement – what can go wrong

What's at stake when emergency contracts – not just to Palantir and Google and the like, but to many other vendors during the pandemic – escape scrutiny or circumvent the usual bidding and tendering process?

Peter Smith, former president of the Chartered Institute of Purchasing and Supply, told The Register that studies of countries including South Africa have shown that where favouritism and nepotism infect public procurement, suppliers tend either to withdraw from the market or to cut investment in technology, products and services – putting the money instead into employing an ex-minister as a non-exec or advisor, and into wining and dining government officials.

He went on to say that the recent spate of stories about a lack of openness in government contracts could damage how the UK is seen as a place to invest.

"We're in danger of moving from a country where we felt public procurement was in the upper quartile in the world, to a place where we're slipping down the league table," said Smith, who works as a consultant, having held senior roles in the public and private sector.

The picture in public procurement could thus cut against government ambitions in AI – and it is not just Faculty that has a close relationship with the government and a role in its AI strategy. As mentioned, Google was part of the group on the NHS COVID-19 data store deal, and again it took the pressure of legal letters to get that deal aired in the public domain.

British government's AI strategy and citizens' data rights

DeepMind got prime spot on the press release for the UK AI Strategy, under the banner of a "new 10-year plan to make the UK a global AI superpower."

"AI could deliver transformational benefits for the UK and the world – accelerating discoveries in science and unlocking progress," Hassabis said in the pre-canned publicity material.

Part of the UK's vision for its AI strategy is an industry "with clear rules [and] applied ethical principles."

But Google, DeepMind's parent company, has found it difficult to get out of the AI ethics quagmire.

A UK law firm is bringing legal action on behalf of patients it says had their confidential medical records obtained by Google and DeepMind in breach of data protection laws. Mishcon de Reya launched the legal action in September 2021, saying it plans a representative action on behalf of Andrew Prismall and the approximately 1.6 million individuals whose data was used as part of a testing programme for medical software developed by the companies.

DeepMind worked with Google and the Royal Free London NHS Foundation Trust under an arrangement formed in 2015. In 2017, Google's use of medical records from the hospital's patients to test a software algorithm was deemed legally "inappropriate" by Dame Fiona Caldicott, National Data Guardian at the Department of Health.

Law firm Linklaters, however, carried out a third-party audit of the data processing arrangement between the Royal Free and DeepMind, and concluded that the approach was lawful.

Separately, Timnit Gebru, former co-lead of the Chocolate Factory's ethical artificial intelligence team, left under controversial circumstances in December last year after her managers asked her either to withdraw an as-yet-unpublished paper or to remove the names of employees from it.

In the time since leaving the search giant, Gebru has staked out a stance on AI ethics. In a recent interview with Bloomberg, she said labour and whistleblower protections were the "baseline" for making sure AI was fair in its application.

"Anything we do without that kind of protection is fundamentally going to be superficial, because the moment you push a little bit, the company's going to come down hard," she said.

Among the long list of organisations and companies adding their names to the UK government's AI Strategy, who would back her stance?

We asked DeepMind, Benevolent AI CEO and co-chair of Global Partnership on Artificial Intelligence Joanna Shields, Alan Turing Institute professor Sir Adrian Smith, CEO of Tech Nation Gerard Grech, president of techUK Jacqueline de Rojas, and Nvidia veep David Hogan if they had thoughts on the issue.

None of them responded to the specific point, although we have included the responses we did receive in the box below.

While the UK has legal whistleblower protection in certain scenarios, it only applies to law-breaking, damage to the environment and threats to the health and safety of individuals. Where the law is unclear on AI, it is uncertain what protection whistleblowers might get.

Meanwhile, proposals from the Home Office suggest a public interest defence for whistleblowing might be removed.

The only way is ethics

On the questions of AI ethics, the focus has been on data. Historical data created by humans in a particular social context can, when used to train AI and ML systems, produce biased results – as in the case of the sexist AI recruitment tool that Amazon scrapped shortly after its introduction.

An industry has developed around these questions, with vendors offering tools that scan for bias in training data and flag features that can act as proxies for race – postal codes, for example.
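To illustrate the idea – this is a minimal sketch, not any particular vendor's product – a crude proxy scan might measure how much information each nominally neutral column shares with a protected attribute. The column names, toy data and 0.3 threshold below are invented for the example.

import pandas as pd
from sklearn.metrics import normalized_mutual_info_score

def proxy_scan(df: pd.DataFrame, protected: str, threshold: float = 0.3) -> dict:
    """Flag columns sharing high normalised mutual information (NMI)
    with a protected attribute - candidate proxy variables."""
    # Encode the protected attribute as integer category codes.
    target = df[protected].astype("category").cat.codes
    suspects = {}
    for col in df.columns.drop(protected):
        # Continuous features should be binned first; unique-per-row
        # columns such as IDs will score a spurious 1.0 and need excluding.
        codes = df[col].astype("category").cat.codes
        score = normalized_mutual_info_score(target, codes)
        if score > threshold:
            suspects[col] = round(float(score), 3)
    return suspects

# Toy data: postcode perfectly predicts ethnicity here, so it is
# flagged as a proxy; years of experience is not.
df = pd.DataFrame({
    "postcode":  ["AB1", "AB1", "CD2", "CD2", "AB1", "CD2"],
    "years_exp": [3, 5, 3, 5, 5, 3],
    "ethnicity": ["x", "x", "y", "y", "x", "y"],
})
print(proxy_scan(df, "ethnicity"))  # {'postcode': 1.0}

Real tools go further, auditing model outputs as well as inputs, but the principle is the same: find the innocuous-looking columns that quietly encode a protected characteristic before a model is trained on them.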

But for some, the problem of AI ethics runs deeper than the training data. A paper shared on Twitter by former Google ethics expert Gebru found that, far from considering the wider societal impact of their work, a sample of 100 influential machine-learning papers defined and applied values supporting the centralisation of power.

"Finally, we find increasingly close ties between these highly cited papers and tech companies and elite universities," the paper said [PDF].

Speaking to The Register, paper co-author Ravit Dotan, a postdoctoral researcher at the Center for the Philosophy of Science at the University of Pittsburgh, said the point of the study was to examine the values behind ML research and researchers' motivations.

"Who is the target, the beneficiary? Is it people within the discipline or is it a broader community? Or is it Big Tech? We wanted to see how authors intend to satisfy [that target]. We also wanted to understand the funding structure better," she said.

The paper also looked at whether ML researchers considered the negative consequences of their work. The vast majority did not. "It was very rare to see any kind of work addressing potential negative consequences, even in papers where you really would expect it, such as those looking at the manipulation of videos," Dotan said.

In a world where deepfake porn is prompting those whose likenesses have been stolen (mostly women) to fight for tighter regulation, the negative consequences of image manipulation seem all too evident.

In her interview with Bloomberg, Gebru also called for the regulation of AI companies. "Government agencies' jobs should be expanded to investigate and audit these companies, and there should be standards that have to be followed if you're going to use AI in high-stakes scenarios," she said.

But the UK's AI strategy is vague on regulation.

Although it acknowledges trends like deepfakes and AI-driven misinformation might be risks, it promises only to "publish a set of quantitative indicators... to provide transparency on our progress and to hold ourselves to account."

It promises that "the UK public sector will lead the way by setting an example for the safe and ethical deployment of AI through how it governs its own use of the technology."

It adds that the UK will "seek to engage early with countries on AI governance, to promote open society values and defend human rights.

"Having exited the EU, we have the opportunity to build on our world-leading regulatory regime by setting out a pro-innovation approach, one that drives prosperity and builds trust in the use of AI.

"We will consider what outcomes we want to achieve and how best to realise them, across existing regulators' remits and consider the role that standards, assurance, and international engagement plays."

And data protection regulation? Even murkier...

One existing regulator, the Information Commissioner's Office, is already engaged with proposed changes to data protection law following the UK's departure from the EU. The government review has provoked alarm as it proposes watering down individuals' rights to challenge decisions made about them by AI.

Meanwhile, the UK has published guidance on AI ethics in the public sector, developed by the Alan Turing Institute, an AI body formed by five leading UK universities. This was followed by the government's Ethics, Transparency and Accountability Framework for Automated Decision-Making, launched in May 2021.

Critics might argue that guidance and frameworks do not amount to law and remain untested. The government has promised to publish a White Paper – or policy document – on governing and regulating AI next year.

A government spokesperson sent us a statement after initially only wanting to brief The Reg on background:

"We are committed to ensuring AI is developed in a responsible way. We have published extensive guidance on how firms can use the technology ethically and transparently and issued guidance so workers in the field can report wrongdoing while retaining their employment protections. We are also going to publish a White Paper on governing and regulating AI as part of our new national AI Strategy."

In the launch of the AI Strategy, business secretary Kwasi Kwarteng described his desire to "supercharge our already admirable starting position" in AI. But it will take more than words to convince the wider world. Observers will want to see more openness in public-sector contracting and in the government's approach to AI ethics to back up the government's ambition. ®

