Artificial intelligence (AI) may eventually cause humanity’s extinction, according to a stark warning from industry experts.

A group of leading CEOs, engineers and researchers have released a 22-word statement explaining the fatal threat AI poses.

“Mitigating the risk of extinction from artificial intelligence should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” it read.

Published by the Centre for AI Safety, a US-based non-profit, the letter has been signed by industry heavyweights including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.

The open letter is the latest warning about AI development this year to be signed by some of the same industry figureheads.

At the beginning of the year, the Future of Life Institute posted a statement calling for a six-month pause to AI development.


The previous letter questioned whether society should “develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us”.

Dan Hendrycks, executive director of the Centre for AI Safety, said the most recent warning was made as concise as possible to avoid any pushback.

Hendrycks told the New York Times: “We didn’t want to push for a very large menu of 30 potential interventions. When that happens, it dilutes the message.”

The executive director said the statement was a “coming-out” message for many engineers, researchers and CEOs in the industry.

“There’s a very common misconception, even in the AI community, that there are only a handful of doomers,” Hendrycks said.

“But, in fact, many people privately would express concerns about these things.”

The recent warning comes with a preamble from the Centre for AI Safety, which explains that “it can be difficult to voice concerns about some of advanced AI’s most severe risks”.

The organisation added that the statement aims to “open up discussion”.

GlobalData is the parent company of Verdict and its sister publications.