When OpenAI initially reached out to Scarlett Johansson to use her voice for their AI assistant, the actress declined.
However, the voice of the updated version of ChatGPT, which can act as an AI voice assistant, sounded suspiciously like her. Johansson sought legal counsel, but it is as yet unclear whether she will pursue the matter further, as the tech company has since withdrawn the contentious voice from its AI system.
OpenAI and Big Tech arrogance
The public legal spat between Scarlett Johansson and OpenAI reveals the arrogant ethos prevailing among the biggest players in the tech industry: the belief that they are above reproach and consequence. While Johansson, as a Hollywood A-lister, has the platform and financial means to fight OpenAI, not everyone does. The tech industry needs not only better regulation but also proper enforcement of those regulations when necessary.
GlobalData’s AI Governance Framework lists 15 risk factors to consider when designing, developing, and deploying AI, one of which is intellectual property infringement. The use of generative AI to create images, audio, and video has given those who work in media cause for concern, especially given the rapid pace of generative AI development.
The Scarlett Johansson-OpenAI saga only compounds existing fears among those in the media that AI will have a detrimental impact on their industry. SAG-AFTRA, whose members went on strike in 2023 over issues including AI, has called for a person’s image, voice, and likeness to be enshrined as an intellectual property right at the federal level in the US. As generative AI (part of the AI value chain under creation AI) becomes more prevalent, clear legal measures will be needed to protect content creators’ rights to their image and body of work. For this reason, SAG-AFTRA is backing the bipartisan NO FAKES Act, which seeks to protect performers from unauthorised digital replicas.
It will be difficult to effectively regulate AI, even with legal protections
An accusation often levelled at AI systems is the lack of transparency around how they are trained. In the Johansson case, OpenAI would argue that it did not use her voice deliberately and that the resemblance is pure coincidence. Even if it could be proved that OpenAI explicitly used her voice, how effective would legal protections for intellectual property be when it comes to AI? If precedent is anything to go by, not very, especially when it comes to disciplining Big Tech.
Another AI risk factor cited in GlobalData’s AI Governance Framework is data privacy. This area is protected by robust legal structures, most notably the EU’s GDPR, which levies heavy fines on companies that fail to comply with data privacy regulations. The trouble is that Big Tech can afford to pay up.
For Big Tech, such a fine feels less like a punishment to be feared and more like a slap on the wrist. It has a far bigger impact on smaller companies, for whom a fine worth millions would put a large dent in their profits. The rate of AI development will not slow down anytime soon, nor will Big Tech become less arrogant. It is the job of regulators to ensure that the industry is properly held to account.
The punishment for AI companies found to violate regulations should hurt, both legally and financially. Protecting the intellectual property of content creators would send a clear signal that Big Tech giants can no longer act with impunity.