Current and former employees from Google DeepMind and OpenAI have delivered a fresh warning that AI could lead to “human extinction”.

The open letter, signed by 11 staff members, alleged that unregulated AI could aid the spread of misinformation and deepen inequalities worldwide, ultimately risking “human extinction”.

“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm,” the letter said.

However, the letter warned that AI companies currently have only “weak obligations” to share this information with governments and regulators.

OpenAI and Google DeepMind workers said AI companies cannot be trusted to disclose this crucial information of their own accord.

The letter also argued that the structural and financial incentives of AI companies hinder effective oversight.

“We do not believe bespoke structures of corporate governance are sufficient to change this,” the letter said.

The open letter called for AI companies to allow current and former employees to raise risk-related concerns publicly.

AI companies should “allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company’s board, to regulators, or to an appropriate independent organisation with relevant expertise,” the letter said.

“So long as trade secrets and other intellectual property interests are appropriately protected,” it added.

The open letter also called for AI companies not to retaliate “against current and former employees who publicly share risk-related confidential information after other processes have failed.”

GlobalData predicts that the overall AI market will be worth $909bn by 2030, having grown at a compound annual growth rate (CAGR) of 35% between 2022 and 2030.

In the GenAI space, revenues are expected to grow from $1.8bn in 2022 to $33bn in 2027, a CAGR of 80%.