Getty Images, The Associated Press and the News Media Alliance are among the ten signatories of an open letter calling for the regulation of AI in journalism.
The letter, entitled Preserving public trust in media through unified AI regulation and practices, states that AI has the potential to “threaten the sustainability of the media ecosystem” by significantly corroding readers’ trust in the quality and truthfulness of news writing.
Reaffirming the media sector’s readiness to embrace new technologies, from the printing press to the internet and social media, the letter also states that the “pace of development and adoption of AI” vastly overtakes previous technological leaps.
The letter calls for collective negotiations and transparency between media outlets and AI developers to control what copyrighted material is used in training AI tools, as well as eliminating bias within AI algorithms and generated content.
The letter also details the “cost of inaction” if AI-generated misinformation is spread en masse.
“Large language models make it possible for any actor,” the letter reads, “to produce and distribute synthetic content at a scale that far exceeds our past experience.”
Not only has evidence emerged that AI-generated content is increasingly seeping into online news sites, but a recent MIT study also found that people were far more likely to believe AI-generated misinformation than false news written by a human.
Laura Petrone, principal analyst at GlobalData, explained that the implications of using generative AI in the media ecosystem are “profound” for the quality of writing and the future of the profession.
OpenAI has already faced multiple copyright lawsuits from authors claiming that their works were used to train ChatGPT without consent.
“So it’s no surprise that news associations are coming together to express their concerns about the spread of AI-generated content in the media and the lack of guardrails,” Petrone adds.
However, OpenAI has also signed a deal with the Associated Press to allow the company to train its AI on archival news stories.
The open letter itself does state that AI can potentially provide “some significant benefits to humanity” when used correctly.
“The traditional media sector has been affected by sluggish growth for many years,” Petrone elucidates, “so more and more media corporations will likely bet on this technology to drive profits.”
The Guardian reported in May that close to 50 online news sites were almost entirely AI-generated.
With AI-generated content already becoming ubiquitous online, senior analyst Maya Sherman explains the difficulties of creating coherent regulation of AI-generated content.
“Constant reporting on the emergence of online disinformation campaigns and bots has eroded public trust in the news system and its creators,” Sherman explained.
This means that LLMs enabling content generation en masse “highlights the risk of content abuse for malign purposes,” she states.
Regulating this online content therefore requires a “complex enforcement mechanism” combining algorithmic methods, regulators and human content moderators.
“With the commercial expansion of generative AI,” she concluded, “content moderation will have to address detailed scenarios and nuances to be effective in enforcement, especially due to the difficulty of eliminating data biases, without censoring content online.”