Christopher Nolan’s Oppenheimer is set against the backdrop of World War II, chronicling the intense race to develop nuclear weapons in the fight against the Nazis.
The film’s conclusion reveals that, despite emerging as front runners in this pursuit, Oppenheimer feared that detonating the bomb could trigger a chain reaction capable of destroying the entire world, a revelation that left audiences in silence and fear. In the final seconds of the film, when Oppenheimer reminds him of those concerns, Einstein asks, “What of it?” Oppenheimer succinctly replies, “I believe we did”.
Oppenheimer’s apprehension parallels the sentiment surrounding the current progress of artificial intelligence (AI): software systems that use input data to make decisions autonomously. Modern AI has led to several alarming developments, particularly within the realm of political warfare. Examples include autonomous weapons and AI-enabled chemical, biological, or nuclear weapons of mass destruction (WMD). The convergence of AI and WMDs could become one of the greatest threats to humanity since the rise of the Nazi regime in 1933.
The global rise of AI
What some view as modest innovation has, in fact, served as a fundamental ingredient in today’s futuristic worldview of military technology. The atomic bomb Oppenheimer helped create triggered a chain reaction of its own, as states such as the Soviet Union, the UK, China, and later North Korea developed their own nuclear weapons, including the hydrogen bomb built by the Soviet Union. The combination of mutually assured destruction and the devastating consequences of use has so far deterred the deployment of such weapons, undermining Oppenheimer’s prophecy. However, states have not shied away from exploring double-edged technologies like drones and AI to advance military objectives.
AI-based weaponry is already revolutionizing warfare in the modern era. AI enhances military strategy, tactics, and operations. Reports suggest that the Russian Ministry of Defense has used AI to analyze data for effective decision-making and began experimenting with autonomous weapons such as armed drones in 2018.
Furthermore, Ukraine has used explosive drones to target Russian forces, and Russia has warned the UK that continuing to supply Ukraine with weapons and tactics, including long-range Storm Shadow missiles capable of striking far behind the front line, risks escalating the conflict. Were Russia to escalate the war against Ukraine, it could potentially deploy killer robots, lethal autonomous weapons, to perform an act of war.
The future threat
Recent developments in the AI market, such as OpenAI’s ChatGPT, a large language model capable of generating text, code, and other content, have sparked a wave of investments from companies worldwide. According to GlobalData, the global AI market will be worth $908bn by 2030, up from $81.3bn in 2022, growing at a compound annual growth rate of 35.2%.
While ChatGPT does not have a clear role in developing WMDs, it has created a renewed drive among states to explore the use of AI and its potential to cause harm. AI’s newfound proficiency in seamless content production has spawned doomsday scenarios across the internet. One such scenario is governments or defence companies releasing an AI-enabled WMD. And this is not as far-fetched as it might sound. The Pentagon has given the US Department of Defense the green light to explore AI in conjunction with devastating weaponry that could threaten the extinction of humanity, as WMDs can kill millions of innocent civilians and devastate entire continents.
AI itself may not necessarily contribute to destroying the world, but the potential of AI to cause damage in the hands of state actors raises concerns. Vladimir Putin ordered the Russian government to fund AI research in the race against the West in early September this year, while China intends to become a world leader in AI by 2030, recently unveiling a “killer” four-legged robot that wields autonomous weapons.
Sam Altman, CEO of OpenAI, stated that the company is “a little scared of this”, while Putin declared that “whoever becomes the leader in this sphere will become ruler of the world”. With research exploring the ability of AI systems to make kill shots entirely on their own, alongside the development of killer robots, it is not a matter of ‘if’ but ‘when’ states will use such weapons on the frontline. Ultimately, the threat of AI-enabled mass destruction lies at the feet of the first state to deploy the technology.