Despite growing concerns and public attention surrounding AI ethics, only 25% of those involved in the development and deployment of AI are prioritising the issue, according to research published by PwC.
In a survey conducted to coincide with the launch of its Responsible AI Toolkit, PwC also found that just 20% of organisations had clear processes for identifying AI risks.
The remaining 60% instead rely on informal processes or on developers, or have no documented procedures at all.
Where companies did have frameworks or considerations in place for AI ethics, PwC found significant inconsistency in how they were enforced.
Lack of AI ethics plans a challenge for the C-suite
This is also having an impact on how companies respond to issues with their AI. 56% of those surveyed said they would find it difficult to explain the cause if their AI did something wrong, while 39% were only “somewhat” sure that they knew how to stop their AI if something did go wrong.
For PwC, the research highlights the need for the C-suite to take responsibility for how organisations handle AI.
“AI brings opportunity but also inherent challenges around trust and accountability. To realise AI’s productivity prize, success requires integrated organisational and workforce strategies and planning,” said Anand Rao, Global AI Leader, PwC US.
“There is a clear need for those in the C-suite to review the current and future AI practices within their organisation, asking questions to not just tackle potential risks, but also to identify whether adequate strategy, controls and processes are in place.”
This is particularly important given the negative business impact inadequate AI ethics can have on an organisation.
“The issues of ethics and responsibility in AI are clearly of concern to the majority of business leaders,” said Rao.
“The C-suite needs to actively drive and engage in the end-to-end integration of a responsible and ethically led strategy for the development of AI in order to balance the economic potential gains with the once-in-a-generation transformation it can make on business and society. One without the other represents fundamental reputational, operational and financial risks.”
Read more: Friend or foe? Here’s how experts expect robotics to progress over the next decade