ChatGPT maker OpenAI’s safety leader Aleksander Madry is working on a new research project, CEO Sam Altman announced on Tuesday (23 July).
The news comes as OpenAI continues to bolster its safety team amid growing concerns about the risks associated with large language models.
“Aleksander is working on a new and (very) important research project,” Altman wrote on social media platform X.
Madry will be leaving his role in the company’s preparedness team temporarily and handing over duties to Joaquin Quinonero Candela and Lilian Weng.
“Joaquin and Lilian are taking over the preparedness team as part of unifying our safety work”, Altman wrote on X.
As AI becomes ubiquitous online and in business, countries around the world have rushed to regulate its usage.
OpenAI, which is heavily backed by tech giant Microsoft, recently formed a Safety and Security Committee for its board members as the leading AI start-up works towards releasing its latest AI model.
The ChatGPT maker stated that its committee would help inform safety and ethical decisions ahead of its progress towards artificial general intelligence (AGI).
AGI is a theoretical form of AI whose knowledge would surpass that of a human.
According to research and analysis company GlobalData’s executive briefing on AI, the global AI market is set to exceed $1trn by 2030, achieving a compound annual growth rate of 39% from 2023.
In its 2024 tech sentiment polls, more than 20% of businesses said they already had a high rate of AI adoption in their workloads.