At The Common Good in the Digital Age conference last weekend, Pope Francis warned that artificial intelligence (AI) could, if unchecked by ethics, become “an enemy of the common good”.
But is his assessment of AI as a potential new “barbarism” a fair one?
Pope Francis on artificial intelligence: A new barbarism?
The Common Good in the Digital Age was a three-day conference held at the end of September in the Vatican, which saw academics and religious authorities discuss the social, ethical and political implications of recent technological developments.
As part of the conference, Pope Francis addressed diplomats, financiers and tech company executives, warning that the rush to develop artificial intelligence must be accompanied by ethical evaluation of its impact on the common good to mitigate the risk of widening social inequality.
In his speech, Pope Francis said:
“If technological advancement became the cause of increasingly evident inequalities, it would not be true and real progress.
“If mankind’s so-called technological progress were to become an enemy of the common good, this would lead to an unfortunate regression to a form of barbarism dictated by the law of the strongest.”
But is that a fair assessment? According to Tech Nation applied AI lead Harry Davies, to some extent, yes.
“Pope Francis is right to raise concerns about the social implications of technology, particularly AI,” he said.
“Whenever we utilise any new technology, there is a spectrum of possible outcomes – some good, some bad – but the potentially vast impact of AI upon society coupled with an exponential rate of change mean that we must endeavour to treat developments in AI with real responsibility.”
Applications of AI
Artificial intelligence has a great deal of potential to work toward the “common good” Pope Francis mentioned, mitigating human error and making everyday operations safer across a number of sectors.
AI has applications across multiple industries and infrastructures, from mundane everyday interactions to multinational operations. In March 2019, the Oil and Gas Authority (OGA) launched the UK’s first National Data Repository, which uses AI to interpret over 130TB of reservoir and infrastructure data in support of the UK’s energy transition efforts.
The medical sector could also benefit greatly from AI. Artificial intelligence algorithms have the potential to detect useful patterns based on millions of data points to help detect signs of disease early, and could assist in the development of new technologies and cures through analysing data faster than any human could.
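As a rough illustration of the kind of pattern detection described above, the sketch below trains an off-the-shelf classifier on a public demonstration dataset and ranks cases by predicted risk. The dataset and model choice are assumptions for illustration only, not drawn from any clinical system mentioned here.

```python
# Illustrative sketch only: a standard classifier trained on a public demo
# dataset, ranking cases by predicted risk. A real clinical model would need
# validated data, expert oversight and regulatory review.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)        # public demo dataset; class 0 = malignant
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Rank held-out cases by predicted probability of malignancy so that the
# highest-risk cases could be reviewed by a clinician first.
risk = model.predict_proba(X_test)[:, 0]           # column 0 = malignant class
print("AUC:", roc_auc_score(y_test == 0, risk))
```

Any model of this kind would still need clinical validation and, as discussed later in this article, scrutiny for bias in the data it learns from.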
A job creator
The development of AI has raised concerns about job loss and redundancy, but recent data suggests that the rise of AI could create jobs. The demand for AI comes with a demand for data scientists and engineers, and vacancies for these jobs have increased considerably in recent years.
The Harnessing the Power of AI: The Demand for Future Skills report, produced by global recruitment agency Robert Walters and market analysis company Vacancy Soft, predicts that the uptake of AI will create 133 million new jobs globally and “drastically change” the UK job market.
“Popular fear of AI and its algorithms reflects not only concerns about job loss, but also widespread anxiety that simply being human will lose value as our lives become patterned according to programs that tell us what we like and what to do, and over which we have no control,” says Guido Jouret, CDO of Swiss-Swedish technology multinational ABB.
“I see things in a far more positive light. I work with AI every day, sometimes in the lab but more often in factories, the high seas, remote mines, the ocean floor, aircraft aloft, city transit systems, wilderness power stations and ordinary offices.
“Rather than a ‘menace to humanity’, I believe artificial intelligence represents a potential path to an upgrade that could be called ‘HumAIn’.”
This view is echoed by Davies.
“Though meant as a warning, not a proclamation of where we are today, it is premature to suggest that we are entering a new age of barbarism,” he says.
“For every opportunist, there are well-intentioned people building AI products that powerfully make the world a better place, be it diagnosing cancer earlier, combatting misinformation and fake news, or using AI to solve climate change. We must be careful not to throw the baby out with the bathwater.
“Equally, the oft-floated criticism of huge job losses and mass inequality is not a foregone conclusion, though avoiding this scenario will require thorough thinking, reasoned debate, and creative ways of rethinking our approach to education and the economy.”
Limitations of AI
Despite the potential of AI to work for the common good, the technology still has its limitations, and without careful scrutiny and supervision it can also reinforce existing social inequalities.
“Like so many of the great challenges of our time – climate change, ageing, and others – this feels an inherently political one. Supranational and global cooperation will be paramount to avoid the ‘race to the bottom’ that Pope Francis highlights, where countries and corporations cut corners to stay ahead,” says Davies.
“This is not just theoretical. We already see the effects of malevolent technology where it is implemented in our society, be it the ‘deep fakes’ he highlights or pernicious uses of facial recognition technology, and we all know the story of Cambridge Analytica.”
Artificial intelligence algorithms can also cause problems when they lack human context. During the June 2017 terror attack on London Bridge, Uber’s pricing algorithm sparked outrage by automatically increasing trip prices, and YouTube’s recommendation algorithms have been the subject of much debate for their propensity to recommend provocative and extremist content.
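A minimal sketch of why that missing context matters, using hypothetical pricing logic rather than Uber’s actual system: a rule that reacts only to the ratio of ride requests to available drivers treats an emergency exactly like rush hour unless a separate, human-set override is built in.

```python
# Hypothetical surge-pricing rule (not Uber's actual system): fares rise with the
# demand/supply ratio regardless of why demand spiked, so an emergency looks the
# same as rush hour unless a human-context override is applied.
def surge_multiplier(ride_requests: int, available_drivers: int,
                     incident_declared: bool = False) -> float:
    if incident_declared:
        return 1.0                                 # human override: no surge during emergencies
    if available_drivers == 0:
        return 3.0                                 # hypothetical ceiling
    ratio = ride_requests / available_drivers
    return min(3.0, max(1.0, ratio))               # clamp between 1x and a 3x ceiling

print(surge_multiplier(900, 300))                          # rush hour: 3.0
print(surge_multiplier(900, 300, incident_declared=True))  # declared incident: 1.0
```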
Bias concerns
As well as problems caused by a lack of human insight, AI created by humans can in turn reflect human biases.
Artificial intelligence deployed in sectors such as employment, the justice system or even medicine could internalise and reinforce the prejudices of its human programmers, with these biased calculations given additional weight by the preconception that AI is more “objective” than its human counterparts.
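The mechanism is straightforward to reproduce. In the hedged sketch below, a model is trained on synthetic hiring data in which one group historically needed a higher skill score to be hired; the trained model then reproduces that disparity for equally skilled candidates. Every feature name and number is invented for illustration.

```python
# Synthetic illustration of bias internalisation: the historical hiring labels
# below required a higher skill score from group 1, and a model trained on them
# reproduces that disparity. Features and numbers are invented.
import random
from sklearn.linear_model import LogisticRegression

random.seed(0)
X, y = [], []
for _ in range(5000):
    group = random.randint(0, 1)                         # protected attribute
    skill = random.random()                              # genuinely job-relevant feature
    hired = int(skill > (0.8 if group == 1 else 0.5))    # biased historical decision
    X.append([group, skill])
    y.append(hired)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Two equally skilled candidates, differing only in group membership:
print(model.predict_proba([[0, 0.65]])[0][1])   # high predicted hiring probability
print(model.predict_proba([[1, 0.65]])[0][1])   # markedly lower probability
```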
Increasing the influence of AI on society could also leave financial and social infrastructure more vulnerable to cyber-attacks, and putting machine-learning algorithms in charge of important administrative duties could pose a threat to the common good if their learning process is exploited.
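A toy example of that last risk, often described as data poisoning, is sketched below: if an attacker can append mislabelled examples to a model’s training data, the model learns the attacker’s preferred behaviour. The spam-filter scenario and all of its messages are synthetic.

```python
# Toy data-poisoning sketch: a spam filter whose training feed an attacker can
# append to. Poisoned examples (spam deliberately labelled "ok") change what the
# model learns. All messages here are synthetic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

clean_texts = ["win a prize now", "cheap pills online",
               "team meeting at noon", "quarterly report attached"]
clean_labels = ["spam", "spam", "ok", "ok"]

# Attacker floods the training feed with spam-like messages labelled "ok".
poison_texts = ["win a prize now"] * 20
poison_labels = ["ok"] * 20

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(clean_texts + poison_texts, clean_labels + poison_labels)
print(model.predict(["win a prize now"]))      # now classified as "ok"
```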
“Ethics in Artificial Intelligence is a hot topic, but it’s not necessarily a new topic, since AI can trace its roots back to 1956,” says Simon Driscoll, data and intelligence practice lead at IT services provider NTT DATA UK.
“As the use of AI becomes more prevalent, and in order to ensure that it is accepted in society, it’s important that the decision-making process is open, moral and explainable if it is to be trusted and accepted by humans.
“Artificial Intelligence is founded on data – it can only learn from the data it’s given, and it can only keep on learning if the data it receives continues to be good.
“If an AI-enabled machine makes a decision, there has to be a reason that can be rationalised by the person who created it and if any bias has seeped into this process, it is the human who is to blame, rather than the AI itself. Reliable and trustworthy data, validated by a human, is therefore a key requirement to ensure AI remains morally correct and useful in the future.”
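As a hedged illustration of Driscoll’s point about explainable, rationalisable decisions, the sketch below fits an interpretable model to a public demonstration dataset and reads off which inputs push its predictions hardest in either direction. It is one possible approach, not a reference implementation of any system discussed above.

```python
# Illustrative only: with an interpretable model, the weights behind a decision
# can be inspected and rationalised by the people who built it. The dataset is a
# public demonstration set, not a real deployed system.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# List the inputs that pull the model's predictions hardest in either direction.
weights = model.named_steps["logisticregression"].coef_[0]
ranked = sorted(zip(data.feature_names, weights), key=lambda p: abs(p[1]), reverse=True)
for name, weight in ranked[:5]:
    print(f"{name}: {weight:+.2f}")
```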