AI copyright row deepens: Stability VP quits in protest over 'fair use' excuse
Refuses to sing from the corporate songbook on the legal grounds for training neural nets on people's work
The VP of audio at Stability AI has decided his position at the content-generating startup is untenable, given his belief in protecting artists' copyrights and his now-former employer's stance that training machine-learning models on copyrighted material is legally OK.
"I've resigned from my role leading the Audio team at Stability AI, because I don't agree with the company's opinion that training generative AI models on copyrighted works is 'fair use'," former veep Ed Newton-Rex wrote on social media.
Operations like Stability, which are developing AI systems capable of generating synthetic content from natural-language conversations with users, often train their models on huge amounts of information scraped from the internet. Works under copyright are inevitably swept up into training data collections.
Having more data improves these models' abilities to produce content across different themes and styles. But artists and legal wonks assert the neural networks, having been tanked up on human creativity, also unfairly rip off people's intellectual property by mimicking or closely copying those humans' artwork, writing, music, code, and so on.
A few artists, writers, and comedians have gone as far as to sue AI startups, accusing the upstarts of violating copyright laws. Creators complain they are losing out on sales and royalties because the trained models can emit content similar or identical to their work on demand for netizens.
However, AI outfits usually argue the generated output isn't breaking any law, and that training on copyrighted data is legally fine since it can be considered fair use.
New works based on copyrighted materials may not violate current laws if they are considered transformative. According to the US Copyright Office, "Transformative uses are those that add something new, with a further purpose or different character, and do not substitute for the original use of the work."
Newton-Rex, however, isn't keen on that interpretation.
"I disagree because one of the factors affecting whether the act of copying is fair use, according to Congress, is 'the effect of the use upon the potential market for or value of the copyrighted work'. Today's generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on. So I don't see how using copyrighted works to train generative AI models of this nature can be considered fair use," he wrote.
Others, like the Authors Guild, hold the same opinion, and have called on developers to compensate writers for using their novels, articles, essays, and poetry.
In his post, Newton-Rex said he was unable to change other Stability executives' minds on this topic, and pointed to the developer's 23-page submission to a US Copyright Office consultation on AI and copyright, which opened: "We believe that AI development is an acceptable, transformative, and socially beneficial use of existing content that is protected by fair use."
The former veep thinks that's just wrong.
"Companies worth billions of dollars are, without permission, training generative AI models on creators' works, which are then being used to create new content that in many cases can compete with the original works. I don't see how this can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright," he argued.
Is the content produced by generative AI models transformative? Is the technology protected by fair use? Should developers pay people for helping create the data they used to train their models?
These issues remain unresolved as litigation continues. Officials from Microsoft and Meta dodged that last question this week during a hearing held by the UK's House of Lords. The threat of more lawsuits, however, may have caused AI businesses to think twice about using copyrighted data in some cases.
OpenAI, for example, has struck deals with photo platform Shutterstock and news agency Associated Press to access their archives. Meanwhile, Google is reportedly attempting to negotiate with Universal Music Group to license music.
The Register has asked Newton-Rex and Stability AI for further comment. ®