The earliest stories of an automaton, a bronze giant named Talos, document our ambitions to give life to beings other than ourselves, and for these beings to act instinctively to keep us safe from harm.
As Talos patrolled the island of Crete, protecting its inhabitants from unwelcome visitors, so does our artificial intelligence (AI) scour machines to safeguard us from malware and record satellite images of Earth to support environmental preservation efforts.
Yet, many view AI as ‘other’—a cold entity that must be controlled. The Center for AI Safety helped propagate this outlook when it released a statement last year, declaring that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Part of the fear relating to AI comes from the unknown, specifically the hypothetical scenario in which AI becomes smarter than humans (also called ‘superintelligence’).
Governing superintelligence
The problem we face is this: how do we approach the governance of superintelligence in a manner that does not stifle its capabilities, yet ensures that we are protected from harm? This is a question that the leading creator of artificial intelligence software, OpenAI, has yet to answer conclusively itself, stating that “[the governance of] superintelligence will require special treatment and coordination”.
Efforts to govern the artificial intelligence in existence today are already underway. The most prominent legislation on AI to date, the European Union’s AI Act, has drawn criticism for what some commentators see as unjustifiable leniency towards certain aspects of AI, with the European Centre for Not-for-Profit Law arguing that the Act “leaves significant gaps and legal uncertainty”. However, a more stringent clampdown on artificial intelligence may arguably stifle what the United Nations calls “the tremendous potential for good” posed by AI.
Nurturing artificial intelligence
As superintelligence appears on the horizon, a softer approach to governance is arguably needed for us to realise all its benefits. This approach can be aided if we begin to see AI as similar to us, rather than other: intelligent “beings” that exist to enrich our lives. After all, we do not aim to stifle intelligence in humans. We nurture and reward it, through advanced degrees, prizes, scholarships, and the like. As a result of this approach, the human mind has given society contributions such as algebra, the wheel, and nuclear fission. The same approach should be offered to AI.
We have already begun to see how AI can benefit the world, enriching industries from healthcare and agriculture to space exploration and sport. A superior version of this technology could exponentially expand on this good and should not be inhibited beyond what is necessary for our safety.
As parents, we may celebrate our children excelling beyond what we were capable of. As teachers, we prize students who have become ‘the master’. As humans, we may revel in the unprecedented possibility of our creation becoming smarter than we are. We should not govern to stifle it. We should govern to nurture, reward, and reap the benefits of a creation that OpenAI believes will offer us a “dramatically more prosperous future.”