
UK seeks light-touch AI legislation as industry leaders call for LLM pause

Steve Wozniak and Elon Musk find something to agree on

The UK has proposed a light-touch approach to regulating AI, coinciding with the first reports of a suicide alleged to have followed weeks of intensive conversation with a chatbot.

The Department for Science, Innovation and Technology has launched a white paper consultation in preparation for developing legislation addressing the risks inherent in deploying AI in society.

In the foreword to "A pro-innovation approach to AI regulation", science, innovation and technology minister Michelle Donelan said the risks associated with AI could "include anything from physical harm, an undermining of national security, as well as risks to mental health."

However, the British government has rejected the risk-based approach taken by the EU, instead advocating a framework designed to "ensure that regulatory measures are proportionate to context and outcomes."

Donelan said: "A heavy-handed and rigid approach can stifle innovation and slow AI adoption. That is why we set out a proportionate and pro-innovation regulatory framework. Rather than target specific technologies, it focuses on the context in which AI is deployed. This enables us to take a balanced approach to weighing up the benefits versus the potential risks."

The UK government's policy position is set against increasing alarm that developments in large language models such as GPT-4 are running ahead of any understanding of the risks.

Hundreds of computer scientists, industry leaders, and AI experts have signed an open letter calling for a pause of at least six months in the training of AI systems more powerful than GPT-4.

Signatories include Apple co-founder Steve Wozniak, SpaceX, Tesla and Twitter CEO Elon Musk, New York University AI researcher and professor emeritus Gary Marcus, and Grady Booch, IEEE computing pioneer and IBM Fellow.

"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts," the letter said.

"If such a pause cannot be enacted quickly, governments should step in and institute a moratorium," the signatories said.

Others, however, were quick to point out that the LLM horse may have already bolted.

Nonetheless, a sober reminder of what might be at stake in regulating AI came with news from Belgium that a man had committed suicide, apparently following a number of weeks' conversation with an online chatbot based on the open-source GPT-J language model. Belgium's digital secretary said yesterday, after speaking with the family, that the situation was "a serious precedent that must be taken very seriously."

In the US, the National AI Initiative Act of 2020 became law on January 1, 2021. It is designed to provide "a coordinated program across the entire Federal government to accelerate AI research and application for the Nation’s economic prosperity and national security." ®
