AI needs a regulatory ecosystem like your car, not a Czar
No single entity tells you to stay in your lane, so why should brainboxes be any different?
The world will need an ecosystem of interlocking AI regulators, pundits argued at the ATxSG conference in Singapore today.
Across several sessions, the advent of generative AI was hailed by many at the conference as a defining moment in human history, on par with the widespread use of fossil fuels or the industrial revolution.
The emerging technology naturally had its own segment at the summit, with much discussion of regulation. As government and industry engaged in a robust back and forth, two themes emerged: comparisons of generative AI regulation with the laws applied to automobiles, and the consensus that lawyers should not use ChatGPT to write case briefs.
Last week, a lawyer did exactly that, complete with citations of six past court decisions – on cases that didn't exist.
According to Kay Firth-Butterfield, executive director of the World Economic Forum-associated Centre for Trustworthy Technology, that lawyer even asked ChatGPT to verify that the cases cited were real.
Almost every panel The Reg attended involved experts having a chuckle at the lawyer's folly.
But it also left a lot of people wondering: is the end user at fault for generative AI's hallucinations – the polite term used to describe mistakes made by the digital brainboxes?
Dr Ansgar Koene, Global AI Ethics and Regulatory Leader at EY, expressed a dislike for the term on the grounds that it anthropomorphizes machine-made errors. Applying a term for human behaviour to a computer's output can lead people to misunderstand what exactly is happening, Koene argued.
"Generative AI is not operating in an altered state, but rather how it is supposed to," said Koene. He then posed the question: "How do we shape government around generative AI if we don't know how it's used?"
And that in turn prompts another question: Who is at fault in this scenario?
"Doctors and lawyers are using these tools," suggested Butterfield. "It's even more important when non-experts are using these tools that there is accountability, non-liability. If you aren't an expert, where do you go to check the tool? The onus should be on those creating it."
With that thought in mind, almost every panel The Reg attended ended up leaning into the car metaphor – and not just because cars and AI are fun to drive, expensive to run, and dangerous after ten beers. Rather, AI likely needs regulating in the same way society has regulated the automobile industry.
"Whenever you produce a new car, you first want to make sure that it's safe to drive before you allow it to be in the streets," said Netherlands' minister for digitalization, Alexandra van Huffelen.
But it's not only manufacturers who are required to conduct rigorous tests and follow standards. Drivers need licenses and must register their vehicles, which must undergo emissions testing. Roads are built to certain standards, and traffic is regulated with lights, signs, and speed limits. Parking is only permitted in certain places.
Firth-Butterfield even pondered out loud on Tuesday whether insurance to mitigate the risks of generative AI would ever become the norm.
"It's not just about the cars and the roads, but also the city and the way we build the city matters," summed up Koene.
Regulatory frameworks for generative AI must also consider how the technology is advertised, how data is gathered with consent and secured, and how copyright law applies – the list goes on.
The inevitable conclusion is that any resulting regulatory framework will be an ecosystem, not a top-down effort bossed by an AI cop. And for a technology that is less than a year old, regulation is coming in fast.
"I think the level of international cooperation between US and its allies in Europe and elsewhere is actually phenomenal in the area of AI policy, and there's a lot of harmonization effort going on," said Nvidia vice president Keith Strier – a man whose employer, a maker of AI hardware, has many reasons to push back on AI regulation.
But even as he pushed back against government intervention, he suggested the tech could be regulated by professional standards, social norms that define boundaries, and education – reminiscent of a "Big Society" approach.
"This technology has been in the marketplace for six months, I've never seen this much activity," Strier said, downplaying the need for urgent action.
But at least this week in Singapore, that opinion – that generative AI may be getting more attention than it needs – seemed to be in the minority. ®