Don't worry, folks. Big Tech pinky swears it'll build safe, trustworthy generative AI
White House bags more voluntary commitments
Eight big names in tech, including Nvidia, Palantir, and Adobe, have agreed to red team their AI applications before they're released and prioritize research that will make their systems more trustworthy, the White House tells us.
Today, the Biden-Harris administration boasted it had secured voluntary commitments [PDF] from Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability AI to develop machine-learning software and models in a safe, secure, and trustworthy way. The commitments only cover future generative AI models.
Adobe and Stability AI build text-to-image products, while Cohere, IBM, Salesforce, and Nvidia offer large language models to enterprises. Meanwhile, Palantir and Scale AI are both US government contractors developing and integrating models for the military, which just about runs the gamut of hypothetical Black Mirror episodes.
"The President has been clear: harness the benefits of AI, manage the risks, and move fast – very fast. And we are doing just that by partnering with the private sector and pulling every lever we have to get this done," chief of staff Jeff Zients said in a statement to the media.
Each of the aforementioned corporations has promised to submit its software to internal and external audits, where independent experts can attack the models to see how they can be misused. The White House is most concerned about AI generating information that could help people make biochemical weapons or exploit cybersecurity flaws, and whether the software can be hooked up to automatically control physical systems or self-replicate.
To tackle these risks, the organizations agreed to safeguard their intellectual property and make sure things like the weights of their proprietary neural networks don't leak – thus preventing the tech from falling into the wrong hands – while giving users a way to easily report vulnerabilities or bugs. They also said they would publicly report their technology's capabilities and limits, including fairness and biases, and define inappropriate use cases that are prohibited.
On top of all this, all eight companies agreed to focus on research investigating the societal and civil risks their AI might pose, such as discriminatory decision-making or weaknesses in data privacy. Generative AI is also prone to producing false information, which bad actors could exploit to spread misinformation.
One way to make these agreements more concrete? The US government wants Big Tech to develop watermarking techniques that can identify AI-generated content. Last month, the smart cookies at Google DeepMind announced SynthID, a tool that subtly alters the pixels of a picture generated by its model Imagen to signal it is a synthetic image.
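SynthID's exact mechanism hasn't been made public, though DeepMind says the mark survives common edits such as cropping and filtering. For a feel of the general idea – hiding a machine-readable signal in pixel values without visibly changing the picture – here's a minimal, hypothetical sketch using classic least-significant-bit embedding with NumPy. To be clear, this is not how SynthID works; it's the naive version of the concept.

```python
import numpy as np

# Hypothetical payload for illustration; real schemes use robust, secret signals
WATERMARK = "AI-GENERATED"

def embed_lsb(image: np.ndarray, message: str) -> np.ndarray:
    """Hide `message` in the least-significant bits of a uint8 image.
    A toy stand-in for invisible watermarking, not DeepMind's method."""
    bits = np.unpackbits(np.frombuffer(message.encode(), dtype=np.uint8))
    flat = image.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("image too small for message")
    # Clear each carrier pixel's lowest bit, then write one message bit into it
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image: np.ndarray, length: int) -> str:
    """Read back `length` bytes of hidden message from the lowest bits."""
    bits = image.flatten()[:length * 8] & 1
    return np.packbits(bits).tobytes().decode()

# Demo on random "pixels": the message round-trips, pixels shift by at most 1
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed_lsb(img, WATERMARK)
assert extract_lsb(marked, len(WATERMARK)) == WATERMARK
print("max per-pixel change:", np.abs(marked.astype(int) - img.astype(int)).max())
```

The catch, and the reason production watermarks are far more sophisticated, is that an LSB mark like this is wiped out by JPEG compression, resizing, or screenshots – exactly the transformations AI-generated images go through the moment they're shared.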
Finally, the US has asked the corps to commit to building models for good, such as fighting climate change or improving healthcare. In July, seven top AI companies and startups – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – agreed to the same commitments. Again, the to-do list is all voluntary, meaning the companies won't really get into any trouble if they go against their word.
That said, it's in those organizations' interests to follow through on the promises: they should end up making better software that people are willing to trust and pay for, and they get to bake in some early protection against the murkier uses of today's artificial intelligence. For instance, say what you want about Microsoft, but it's not in the Azure giant's interest to have its tech used to create deepfake pornography on an industrial scale.
Meanwhile, the White House gets to say it's working toward developing real regulation, and that these undertakings are a first step in that direction.
"These commitments represent an important bridge to government action, and are just one part of the Biden-Harris administration's comprehensive approach to seizing the promise and managing the risks of AI. The administration is developing an Executive Order and will continue to pursue bipartisan legislation to help America lead the way in responsible AI development," the White House said in a canned statement. ®