Tech leaders, including Elon Musk, have urged artificial intelligence (AI) developers to pause the AI race, warning that systems more powerful than GPT-4 could put civilisation at risk.
A Future of Life Institute open letter, signed by tech leaders including Pinterest co-founder Evan Sharp and Tesla CEO Elon Musk, urged companies developing AI to immediately pause development for at least six months.
The letter warns that the race to develop and deploy AI systems that, at this point, not even their creators can understand will put humans in danger.
Alec Boere, associate partner for AI and automation, Europe, at Infosys Consulting, told Verdict: “When implementing AI models, responsibility should be at the forefront of the enterprise, placing a particular focus on the five core pillars of trust.
“Whilst OpenAI has opened the ChatGPT door, greater controls need to be implemented, allowing for the management of data sources and more guardrails to ensure trust,” says Boere.
“To help maintain this trust, every organisation should have policies to ensure they are being AI responsible,” Boere continues.
AI companies are being advised to jointly develop and implement a set of shared safety protocols for advanced AI design and development.
Even if the plea for a pause on the AI race is ignored, governments should step in, follow examples such as the UK's recent AI white paper, and implement more rigorous regulation.
“It’s important that governments are given the time to provide clarity not only on the direction of travel for regulation in this space but also in a way that promotes investment – and quickly,” says partner at transatlantic law firm Womble Bond Dickinson, Alastair Mitton.
“There is no doubt that AI presents some very specific risks, from compromising privacy and human dignity to damaging property and mental health. It will remain an uphill battle to regulate AI in a way that ticks all the boxes. It is practically impossible to set out rules for something that is developing at this speed,” he continues.