The ‘dead internet theory’ is an online conspiracy theory claiming that the internet, as a platform created by humans for humans, has ceased to exist and now consists mainly of content automatically generated, curated, and shared by AI-powered bots.

Some proponents of the theory even believe that the surge in automatically generated content is an intentional government plot to control public opinion.

While the theory is a little outlandish (to say the least), there is a discussion to be had about the undeniable increase in bot activity online. Cybersecurity company Imperva estimates that just under half of all internet traffic in 2023 was generated by bots.

Assuming this share continues to grow as AI models become more accessible to the general public, what are the consequences of a ‘dead’ internet?

What are bots and why are they here?

A bot is an automated software application that performs repetitive tasks over a network. On the internet, bots can be used to download files, buy tickets to popular events, spam email addresses with promotional content, and search the internet for desired data points.
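To make that definition concrete, here is a minimal, hypothetical sketch of what “an automated software application that performs repetitive tasks over a network” can look like in practice. The URL, polling interval, and function names are illustrative placeholders, and real-world bots (scrapers, ticket buyers, spam senders) are far more elaborate.

```python
# Illustrative sketch only: a "bot" in its simplest form is just a script
# that repeats a network task without human involvement.
import time
import urllib.request

TARGET_URL = "https://example.com/status"  # hypothetical endpoint
CHECK_INTERVAL_SECONDS = 5                 # arbitrary polling interval


def fetch_page(url: str) -> str:
    """Download a page and return its text content."""
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read().decode("utf-8", errors="replace")


def run_bot(iterations: int = 3) -> None:
    """Repeat the same fetch task a fixed number of times."""
    for i in range(iterations):
        page = fetch_page(TARGET_URL)
        print(f"Run {i + 1}: fetched {len(page)} characters")
        time.sleep(CHECK_INTERVAL_SECONDS)


if __name__ == "__main__":
    run_bot()
```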

Their most noticeable iteration is on social media platforms, where bots create fake accounts that generate and share messages, images, and videos. These bots have been at the forefront of concerns over the spread of disinformation as they can be used to infiltrate groups of people and propagate specific ideas with little oversight.


Today, it is not even uncommon to see bots interact with one another on websites like Quora, Facebook, and X (formerly Twitter).

Initially, social media platforms were not incentivised to address the growing prominence of bots, as more accounts and more engagement allowed platforms to charge higher premiums for advertisements.

Big Tech was only really pushed to address the issue when advertisers realised they were overpaying to reach largely fake audiences and threatened to leave the platforms, followed by government bodies proposing more stringent regulations to tackle the spread of online misinformation. Since October 2017, Facebook has deleted more than 27 billion fake accounts and, in May 2024, began applying “Made with AI” labels to AI-generated content on Facebook, Instagram, and Threads.

What is the harm?

By no means should we underestimate the human capability to identify AI bots. The young, ‘critically online’ generations are acutely aware of what is real and what is fake. As we are exposed to more AI content, our brains become more attuned to the little details and quirks that differentiate real images and conversations from AI-generated ones.

That is, however, not necessarily the case for older generations, as seen in the interactions under fake accounts whose posts are composed solely of strange (albeit at times funny) AI-generated images of attractive female doctors, nurses, pilots, and soldiers begging people to wish them a happy birthday.

The comment section of these posts is almost always a mix of bots and older gentlemen granting the ladies’ wishes and declaring the happiest of birthdays.

As AI models become more sophisticated, our ability to distinguish what is real from fake will diminish, and these interactions will become less funny and more concerning. What happens in an online environment where bot-to-bot interactions become the overwhelming majority of interactions?

The rise of bot-to-bot interactions is certainly shaping how humans use social media. For the most part, this is not being done intentionally by any organisation to generate an affinity towards a certain belief system (as some conspiracy theorists like to believe).

Organically, it is making people more distrustful of both real and fake content, as they struggle to distinguish one from the other. It is also likely to increase self-censorship, disincentivising people from sharing their own thoughts and creations for fear of them being used or stolen by bots, or of being found unpopular in an unknowingly fake environment.

In an extreme case scenario, the overcrowding of bots online may cause humans to stop using social media platforms as the social forums they were created to be. This would, indeed, mark the ‘death’ of the social media world we know today.