An executive at artificial intelligence (AI) startup Stability AI resigned last week, saying he did not believe it was legal to develop AI models using copyrighted content.
The departure comes as the company, and several other AI companies, face lawsuits from artists and others who claim their work has been unfairly used in the development of generative AI (GenAI) systems.
GenAI systems are built by scraping large swaths of data, which is used to train the large language models (LLMs) that generate content.
Stability AI, known mostly for its AI-powered image tool Stable Diffusion, claims all of its practices are protected under “fair use” in US law.
Stock image giant Getty Images launched a lawsuit against Stability AI in February, alleging the company used 12 million images to train its AI model “without permission … or compensation.”
Ed Newton-Rex, former vice president of audio at Stability AI, wrote in a resignation statement that “exploiting creators can’t be the long-term solution” for GenAI.
In a post on X, formerly Twitter, on Wednesday, Newton-Rex wrote: “To be clear, I’m a supporter of GenAI. It will have many benefits — that’s why I’ve worked on it for 13 years.
“But I can only support GenAI that doesn’t exploit creators by training models — which may replace them — on their work without permission. I’m sure I’m not the only person inside these GenAI companies who doesn’t think the claim of ‘fair use’ is fair to creators.”
He added that he hoped others in the industry would speak up so that companies realise that exploiting creators can’t be the long-term solution in GenAI.
The use of copyrighted material in the training of GenAI has been a growing issue, especially for businesses looking to integrate the technology.
“Many image generators, when asked to generate work in the style of a specific artist, include a variant of the artist’s watermark, showing that it’s directly pulling their work in order to generate these new images – without permission,” Justin Gould, head of performance marketing at internet service provider Fasthosts, told Verdict.
Gould believes there should be an opt-in system where “artists, writers, and anyone else who has creative works online can expressly state that they’re happy for their work to be used to train these AI models”.
However, he said there are “very few people out there who would actually be happy to give their work to a tool whose ultimate goal is to replace them, so I’d prepare for plenty more lawsuits to be introduced in the near future.”
Lydia Dettling, policy manager at Access Partnership, said that businesses should use GenAI tools built strictly on legally obtained and licensed content.
“I would urge businesses to use these and avoid the risks that may arise from models that scrape the internet for data, and are not transparent about their sources,” Dettling told Verdict.
“Moving forward, the pending EU AI Act will hopefully mandate GenAI tool providers to publicly disclose their training data, which will provide a little more clarity and assurance for users,” she added.