Ex-Google CEO Eric Schmidt has claimed many people could be “harmed or killed” by artificial intelligence (AI) if it is not regulated properly.
Speaking at the Wall Street Journal’s CEO Council Summit in London on Wednesday, Schmidt warned AI could pose “existential risks” if governments did not stop it from “being misused by evil people”.
“And existential risk is defined as many, many, many, many people harmed or killed,” Schmidt said.
AI, more specifically generative AI, has been rocketing in popularity since the release of OpenAI’s ChatGPT in November 2022.
The buzz around the program has pushed other businesses to rush their own versions out into the world – with some drawing criticism on release.
Google’s rival to ChatGPT, Bard, featured a factual error in its first demo – and was also criticised for sending inappropriate responses to users.
Experts have spoken out about the risk ChatGPT poses to cybersecurity, including its ability to help hackers write malicious code.
Continuing on the topic of AI, the ex-Google CEO said: “There are scenarios not today, but reasonably soon, where these systems will be able to find zero-day exploits in cyber issues or discover new kinds of biology.
“Now, this is fiction today, but its reasoning is likely to be true. And when that happens, we want to be ready to know how to make sure these things are not misused by evil people.”
Brian Mullins, CEO of AI scale-up Mind Foundry, has spoken out in response to the stark warning from Schmidt.
Mullins told Verdict: “I think it’s important to look at the current conversation around AI and the level of hyperbole that is leading to irrational and unproductive discourse that won’t actually lead to solutions to real-world problems.”
The Mind Foundry CEO urged that when tackling the risks around AI, “we need people to take action with rational judgement and understanding.”
The comments from the ex-Google CEO follow a recent open letter calling for a six-month pause on AI development, signed by Elon Musk and 1,000 other tech leaders.
The letter, published by the Future of Life Institute, a non-profit partly funded by the Musk Foundation, stated: “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?” the letter read.
GlobalData is the parent company of Verdict and its sister publications.