Adobe: Take user data to train generative AI models? We'd never do that
Controversial policy change forces chief product officer to speak out
Adobe chief product officer Scott Belsky has responded to criticism of the company's content analysis policies by saying it has never used customers' creations to train generative AI models.
Artists were furious to learn earlier this month that Adobe could automatically analyze their audio, video, or text documents stored on its cloud servers to develop and improve its AI products and services unless they opted out. Fears that their work would be used to train generative text-to-image models capable of copying their styles sparked an outcry.
"We have never, ever used anything in our storage to train a generative AI model," Belsky insisted in an interview with Bloomberg this week. "Not once."
Belsky said the content analysis policy was geared towards improving existing features for its graphics software rather than for developing new AI image generation tools. Adobe is now planning to update its rules, and promised to be transparent about whether user data would be used to train these types of models if it decided to build them in the future.
"We are rolling out a new evolution of this policy that is more specific," Belsky said. "If we ever allow people to opt in for generative AI specifically, we need to call it out and explain how we're using it."
Generative AI systems have ignited a battle over copyright. These tools learn to associate images with words, and produce digital art given text descriptions. Trained on huge amounts of labeled data, these systems can reproduce people's styles accurately. A class-action lawsuit recently filed by three artists against Stability AI, DeviantArt, and Midjourney claims the companies infringed artists' copyright by scraping their work without permission to train text-to-image models.
But these companies argue they have repurposed people's images to create something new, making it fair use of the data. "Please note that we take these matters seriously. Anyone that believes that this isn't fair use does not understand the technology and misunderstands the law," a spokesperson for Stability AI previously told The Register.
Adobe appears to have updated its content analysis policy in August last year, and said it does not access content stored locally on users' devices. Belsky said people's concerns over copyright infringement were a "wake-up call" for the company, and said it would "have to be very explicit" in clarifying how it was using people's data. ®