The privacy threats presented by artificial intelligence (AI) are very real, and women and children are particularly vulnerable. With generative AI, the potential good does not outweigh the harm already being done.
In January 2024, explicit AI-generated images of American pop star Taylor Swift began circulating rapidly on X (formerly Twitter). The images were traced in part back to a 4chan message board, where users had been exploiting AI image generators as part of a daily challenge.
As reported in the New York Times, research from the social media analytics firm Graphika has shown that users of the famously unmoderated site have shared techniques to circumvent existing safeguards and avoid bans when generating content. This content is reportedly becoming increasingly sexually explicit and represents not only a grave violation of privacy and interpersonal decency but also a serious threat to mental and physical health.
This kind of harassment is not new
Digital deepfakes have existed for a long time now, and both celebrities and non-celebrities have been affected. Print ‘deepfakes’ (if one could use such a term) have existed for even longer; Marie Antoinette, for example, was the victim of a campaign of published harassment in the leadup to the French Revolution.
Taylor Swift is not the only victim of such abhorrent behaviour, but she is the most prominent figure to suffer from it recently. Also in January, it was reported that a London schoolgirl had killed herself after boys at her school pasted her face from social media onto explicit images as part of a longstanding campaign of relentless bullying. In December 2023, AI-generated explicit photos of students at a school in Winnipeg, Canada, were shared and brought to the attention of school officials; no criminal charges have been laid, exposing a serious gap in Canadian law that many other countries share.
The reason for highlighting such cases—just two of a shocking multitude of them—is not to detail them but to raise alarm bells.
Even at its relatively undeveloped stage, generative AI is causing serious harm with effectively zero repercussions for the perpetrators. While panic is not warranted, this should be a cause for the most serious concern. This issue, for want of a better word (even ‘issue’ sounds like an understatement), raises very sobering questions about our priorities as a society, how we regard women and girls, the purpose of generative AI, and our willingness to regulate tech companies.
The focus is on geopolitics, not generative AI
OpenAI, the company responsible for ChatGPT, announced Sora earlier in February 2024, and already there are concerns. While many have shared their excitement and praise for the text-to-video model as a big step forward in AI, others have expressed worry.
The security risks posed by this model are very serious, especially in a landscape of misinformation and disinformation amid the numerous elections taking place in 2024: most notably the US presidential election, but also elections in India, the UK, and Russia. And while these concerns should in no way be diminished or devalued, there are some very pressing concerns for personal safety to consider too.
Of the top 100 articles shown by a Google search of “sora openai” (as of February 20, 2024), not even one mentioned the risks posed to women and girls by this software.
That all 100 of these articles covered what Sora is, what has been demonstrated, why it is exciting, and why elections might be badly affected by such models is not necessarily anybody’s fault. It certainly is not OpenAI’s fault that tech journalists have written about tech.
However, this kind of journalism feels one-dimensional, devoid of any real analysis of the wider implications of Sora. After all, saying that a model that can produce fake videos from a text prompt might be used to produce a fake video from a political text prompt is not exactly a big leap of the imagination.
On social media, in contrast, users have once again spoken about deleting their accounts to prevent what happened to figures like Taylor Swift from happening to them.
TikTok user @allyrooker expressed her horror, saying that she is “losing [her] mind over how many women’s lives are going to be ruined by this AI video b*******.” It would be perfectly reasonable for a tech journalism outlet, for example, to write about the risks that something like Sora poses to the safety of women and girls, and yet none of the top publications has chosen to do so.
The bigger, more important conversation about generative AI
Regarding generative AI and tech companies, resolving these issues should be an ethical priority, though this is not to say that any action would be easy. Aside from the expected pushback from the tech industry, the specifics of any piece of legislation would be difficult to establish and would end up, I suspect, rather watered down.
However, the purpose here is not to set out what those specifics might be but to express a desired end goal. This end goal should be to eradicate the possibility of explicit content of non-consenting parties being generated using AI, or indeed edited using existing non-AI software like Photoshop (as in the case of more ‘traditional’ deepfakes).
For balance’s sake, there is scope for AI-generated explicit content; there is an entire adult industry. If a content creator chooses to create content of themselves or other consenting parties, then that could be their prerogative. The keys are, and must always be, consent and agency.
The lack of effective legislation, the decision by many media outlets (deliberate or otherwise) not to report on this, and societal attitudes towards consent and agency are all related, and they sit at the very center of this issue.
In many ways, it does not matter how stringently a government might regulate tech companies. Even if, for example, all governments globally were to hold search engine platforms accountable for the material that they host—which is in no way advisable—this would not fundamentally alter the underlying reason why these explicit images are created in the first place. They are created to humiliate the target, to make them fearful, to damage them, to—in the minds of a perpetrator—devalue them.
This behaviour stems from a deep, perhaps hateful, disrespect for the agency of women and girls as human beings of equal value, and from a total disregard for them as anything other than sexual objects to be abstracted and terrorized. Resolving this requires a profound conversation that gets to the heart of society and a willingness to recognize fault where it lies. Some might prefer to think of this process in terms of dismantling patriarchy, others in terms of achieving equality or preventing harm.
However one wants to think about this process, we must recognize it as crucial. No amount of tech regulation can fix this on its own, but regulation is still very necessary, and it would set the stage for a more accountable society and technology environment. Regulating the capabilities of generative AI models, and the source material available to them, as we should have done several years ago, ought to be a high priority.
Generative AI capabilities are remarkable achievements of humankind. Ultimately, however, the question of their utility must be raised: how much good can such capabilities actually achieve, and how does that weigh against the disgusting behaviour they can facilitate?