ChatGPT creator OpenAI outlined a strategic approach to addressing the potential dangers of AI in a blog post published yesterday (18 Dec), including the risk of AI supplying cyber criminals with information on how to construct chemical and biological weapons.

The newly established “preparedness” team will be led by MIT AI professor Aleksander Madry and will comprise AI researchers, computer scientists, national security experts, and policy professionals.

Madry, a seasoned AI researcher who leads MIT’s Center for Deployable Machine Learning and co-leads the MIT AI Policy Forum, was among the group of OpenAI leaders who resigned when CEO Sam Altman faced dismissal by the board in November. Madry returned to the company following Altman’s reinstatement.

The team’s mandate is to monitor evolving technologies, conduct continuous assessments and provide timely warnings in the event AI poses a danger.

The preparedness team will be dedicated to mitigating biases in AI and will incorporate a superalignment team, which explores safeguards against potential future scenarios in which AI surpasses human intelligence.

Google and Microsoft have both previously issued warnings regarding the existential threats posed by AI, likening their severity to that of pandemics or nuclear weapons.

In April, Elon Musk, then Twitter CEO and an OpenAI co-founder, called for a six-month pause in the development of AI systems more powerful than GPT-4, warning of a substantial risk to society.

At the UK’s landmark AI Safety Summit in November, Prime Minister Rishi Sunak attempted to assuage fears of AI’s potential dangers, following a UK government report which claimed generative AI could be “used to assemble knowledge on physical attacks by non-state violent actors, including for chemical, biological and radiological weapons.”

The report also warned that AI could make it harder to trust online content and could increase the risk of cyber-attacks by 2025.

Yet a growing faction of AI business leaders argues that concerns are exaggerated and that efforts should focus on leveraging technology for societal improvement and financial gain.

Meta’s president of global affairs and former UK deputy prime minister Nick Clegg compared the discourse surrounding AI to the “moral panic” over video games in the 1980s.

OpenAI claims to take a balanced stance on the potential dangers of AI. Altman has acknowledged the long-term risks associated with the technology while emphasising the importance of addressing current issues. He has publicly advocated for regulation to curb the harmful aspects of AI but has cautioned against measures that would hinder the competitiveness of smaller companies.