Google artificial intelligence (AI) pioneer Geoffrey Hinton has left the company so he can speak freely about what he views as the technology’s potential threat to humanity.
Hinton spoke with the New York Times about his mission to warn the world of the potential threats posed by AI, which he believes may appear sooner than anticipated.
Hinton’s academic expertise in AI spans 45 years, and he is widely considered one of the most respected figures working in the field.
According to Hinton, the short-term harm of AI includes its potential for misinformation through AI-generated fake photos, videos and text as well as its potential to replace human work.
But the Google Engineering Fellow’s primary concern is the emerging power differential between digital intelligence and human intelligence. Of particular concern is the power of large language models such as OpenAI’s ChatGPT, which has emerged as the fastest-adopted app in history, attracting 100 million users since its launch in November 2022.
However, GlobalData analyst Joseph Bori says Hinton’s concerns relate more to the potential misuse of AI than to its rapid progress, which, as a scientist who pioneered its development, he likely favours.
“The fundamental issue here seems to be that neither industry self-regulation nor coordinated global regulation has emerged at pace with the advances in AI. As such, we are in the difficult position now whereby AI is advanced enough to cause real damage, yet no way to regulate it,” says Bori.
In the past, Hinton has raised ethical questions about AI, especially its co-optation for military purposes. This is a particular concern in the US, where a lot of research funding is provided by the US Department of Defense, according to Bori.
Google’s position on this topic has long been complex, and in 2016 the company agreed to sell its Boston Dynamics division, whose robots, such as Atlas, could eventually be used on the battlefield. “Whether the driver was just the division’s profitability or ethical concerns with their work, has always been a bit unclear in our view,” says Bori.
According to GlobalData, further complicating matters is the ongoing trade dispute between the US and China, particularly affecting AI technologies and research. “In this context it will be quite unlikely that coordinated regulation will be agreed, and rather a race to dominate the AI field, and particularly its military uses, will take place,” adds Bori.
Alan Vey, a Forbes 30 Under 30 honouree and CEO and founder of blockchain protocol Aventus Network, believes there is a high degree of certainty that machines will eventually be more intelligent than humans in many ways. “It’s just a question of how long this will take and what this intelligence ends up being used towards, intentionally and unintentionally,” says Vey.
According to Vey, Geoffrey Hinton’s perspective is understandable, and bringing awareness to the potential dangers of AI is crucial, but efforts to stop its development are somewhat futile.
Hinton isn’t the only one – Apple co-founder Steve Wozniak and Tesla founder Elon Musk recently signed a public letter calling for a pause in AI development. “But the cat is out of the bag – even if research into artificial general intelligence (AGI) is halted publicly, it’s likely there will be research continued behind closed doors,” adds Vey.