OpenAI calls for tough regulation of AI while quietly seeking less of it
Oh no, over there, look, the killer robots, OMG here they come – just pay no attention to us shaping the rules
OpenAI – the maker of GPT-4, and other generative AI models – has been publicly calling for more AI regulation while privately seeking less of it.
A lobbying document sent to EU lawmakers titled "OpenAI White Paper on the European Union’s Artificial Intelligence Act" makes the case that OpenAI's large foundational models should not be considered high-risk.
The white paper, dating back to September 2022 and obtained by Time, suggests several amendments that reportedly have been incorporated into the draft text of the EU AI Act, which was approved a week ago. The regulatory language will be the subject of further negotiations and possible changes prior to final approval, which could happen within six months.
OpenAI seemed most concerned about sections of the EU law regarding when AIs are classified as high-risk.
The lobbying paper states: "The new language in Annex III 1.8.a could inadvertently require us to consider both GPT-3 and DALL-E to be inherently high-risk systems since they are theoretically capable of generating content within the scope of the clause."
It argues that, rather than adding these restrictive clauses, another part of the law, Article 52, would suffice to ensure that AI providers take reasonable steps to mitigate disinformation and deep fakes without applying the "high-risk" classification.
Under the draft rules, AI systems are considered an "unacceptable risk" if they do any of the following: deploy harmful manipulative 'subliminal techniques'; exploit specific vulnerable groups (physical or mental disability); are used by or on behalf of public authorities for social scoring purposes; or provide real-time remote biometric identification systems in publicly accessible spaces for law enforcement, except under specific circumstances.
Title III (Article 6) of the proposed AI act covers "high-risk" AI systems that could adversely affect people's safety or fundamental rights.
Such systems, subject to transparency and compliance requirements, include those that facilitate: biometric and human categorization systems; management and operation of critical infrastructure; educational and vocational training; employment, worker management, and access to self-employment; access to essential private and public services and benefits; law enforcement; migration, asylum, and border control management; and the administration of justice and democratic processes.
Essentially, the lobbying paper argues that the "high-risk" designation should be applied to fewer uses of AI models. And that message differs from calls by OpenAI officials and others for government oversight – which smaller rivals fear would be unfair.
In effect, the likes of OpenAI want government regulation, but want to steer that regulation so that their machine learning technologies do not fall into a category that will attract tougher rules and more red tape. In public, they call for more rules and safeguards for all, while in private they argue for a lighter touch.
OpenAI CEO Sam Altman and co-founders Greg Brockman and Ilya Sutskever published a blog post earlier this year warning that superintelligent AI could be realized within a decade.
The AI Now Institute – a policy advocacy group – responded to the publication of the OpenAI paper by calling for a closer examination of industry lobbying.
"We need to scrutinize industry posturing on regulating AI as firms lobby to water down rules," the group wrote, adding that general purpose AI carries inherent risks and has already caused harm.
Regulatory capture by big industry players is a known problem in many sectors, and OpenAI's hitherto unseen lobbying shows it, too, is trying to write its own rules. Its professed interest in regulation now invites as many questions as whether the outfit is squeezing out smaller rivals, and whether content creators deserve to be compensated for the massive unpaid harvesting of online content required to train its models.
It would be no surprise to see the developer openly urge lawmakers to pass legislation cracking down on machine learning systems, only for the fine print of those rules to include carefully crafted clauses that allow OpenAI to operate unfettered.
OpenAI declined to comment. ®