UK won't rush to regulate AI, says first-ever minister for digital brainboxes

One does not wish to slow innovation, Viscount Jonathan Camrose opines

The UK government will not rush to pass new laws regulating AI, to avoid hampering innovation and potential financial growth, minister for AI and intellectual property Jonathan Camrose said this week.

In other words, Britain hopes to attract machine-learning talent and business by offering a relatively lax regulatory environment for startups and Big Tech to play in.

Jonathan Berry, 5th Viscount Camrose to give him his full title, is a member of the House of Lords and the country's first minister to hold the portfolio, created by Prime Minister Rishi Sunak in March 2023. He confirmed the government is taking a hands-off approach to regulating AI, at least "in the short term."

His comments come weeks after the UK hosted the global AI Safety Summit, a conference at which national leaders and top tech executives discussed the impact and safety risks of modern neural networks. Unlike some of the attendees, such as the EU and China, the UK has decided not to address potential AI dangers with strict legislation, preferring a "pro-innovation" approach instead.

"I would never criticize any other nation's act on this," Camrose said during a Financial Times conference. "But there is always a risk of premature regulation." Scrambling to regulate AI would limit the technology, he argued.

"You are not actually making anybody as safe as it sounds," he added. "You are stifling innovation, and innovation is a very, very important part of the AI equation."

Britain's decision not to introduce legislation regulating AI isn't surprising: in August the nation's Department for Science, Innovation and Technology and Office for Artificial Intelligence published a white paper stressing that artificial intelligence was vital for the country's economic growth.

"To ensure we become an AI superpower, though, it is crucial that we do all we can to create the right environment to harness the benefits of AI and remain at the forefront of technological developments," Michelle Donelan, Secretary of State for Science, Innovation and Technology, previously wrote in a March preview of the white paper.

Donelan declared herself opposed to a "heavy-handed and rigid approach", which could "stifle innovation and slow AI adoption", and said the government preferred an approach that "relies on collaboration between government, regulators and business."

That approach aligns with Prime Minister Sunak's recent announcements that Google DeepMind, OpenAI, and Anthropic had reportedly committed to giving the government "early or priority access" to their models so that it could probe their capabilities and safety risks. The three labs have also entered into similar voluntary agreements with the US government.

The White House is also yet to push for laws or rules that regulate AI. That said, President Biden urged Congress to pass bipartisan legislation "to help America lead the way in responsible innovation."

Experts within academia and industry are torn over AI regulation. British computer scientist and former Google researcher Geoffrey Hinton, for example, has spoken out about the potential dangers of machines becoming smarter than humans, such as writing code that could lead to harmful outcomes. Others, like Meta's AI chief Yann LeCun, believe arguments that AI poses an existential threat are overblown, and LeCun is wary that regulation would hinder open-source efforts to study and build AI.

"Like many, I very much support open AI platforms because I believe in a combination of forces: people's creativity, democracy, market forces, and product regulations. I also know that producing AI systems that are safe and under our control is possible. I've made concrete proposals to that effect. This will all drive people to do the Right Thing," LeCun said. ®
