OpenAI insiders demand the right to blow the whistle without fear of retaliation
'Current and former employees should retain their freedom to report their concerns to the public,' open letter says
Current and former Google DeepMind and OpenAI staff have signed an open letter calling for whistleblower protections and greater accountability among companies at the leading edge of AI development.
Among the former OpenAI employees who have signed the letter is researcher Daniel Kokotajlo.
While not mentioned by name in the missive, OpenAI has faced criticism in recent weeks, both over allegations that ex-employees who speak ill of the company could be punished and over claims that its CEO has not been entirely open with the company's own board.
In late May, Helen Toner, a former OpenAI board member, accused CEO Sam Altman of lying. Shortly before, it was revealed that OpenAI had inserted a non-disparagement clause into its exit agreements, meaning former employees who criticized the company risked losing the vested equity they had earned during their tenure.
The letter warns: "AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm.
"However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily."
With little effective government oversight of what happens inside AI companies, and given the pace of development, employees are among the few groups that can shine a light on the inner workings of firms such as OpenAI.
However, draconian confidentiality agreements have ensured that concerns cannot be voiced.
The letter says: "Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated.
"Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues."
Therefore, it is unsurprising that the four principles the open letter calls for are all concerned with permitting current and former employees to speak out without fear of retaliation.
Confidentiality agreements are nothing new in the industry, although OpenAI's appeared particularly stringent. However, the pace of AI development means that by the time regulators catch up, it may be too late for concerns to be acted upon.
As the letter says: "Once an adequate process for anonymously raising concerns to the company's board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially.
"However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public."
An OpenAI spokesperson told The Register: "We're proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology and we'll continue to engage with governments, civil society and other communities around the world.
"This is also why we have avenues for employees to express their concerns including an anonymous integrity hotline and a Safety and Security Committee led by members of our board and safety leaders from the company."
The company recently released all employees from the contentious non-disparagement agreements and removed the clause from its departure paperwork. However, it also noted that a balance needs to be struck between allowing former employees to express their views freely and maintaining the security and confidentiality of the technology being built. ®