The release of ChatGPT made AI a household name beyond the computer lab. Gaining over a million users in just five days, ChatGPT ushered in a new era of business in which AI was seen as a necessity for boosting revenue.
Over half of the businesses that took part in a 2023 GlobalData survey said AI was the most disruptive and important theme in their sector, a ranking AI has held since the third quarter of 2021, a year before ChatGPT's release.
In this rush to integrate AI, are businesses paying enough attention to the algorithms they are using?
Biases are ingrained in algorithms
Across industries and throughout the public sector, AI has steadily been woven into day-to-day operations. A recent investigation by The Guardian found that the UK government had used AI to assess benefits applications and to flag potential “sham marriages” to the Home Office, despite concerns that biases within its algorithms could lead it to misjudge applications or marriages.
Rena Bhattacharyya, chief analyst at GlobalData, explained the cause of biases.
“Although we may refer to biased algorithms, it’s not the algorithms themselves that are biased,” she explained. “Usually the problem is the data that is used to train the model is biased.”
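To make that distinction concrete, consider the minimal Python sketch below, in which all names and numbers are invented for illustration. The counting “model” treats every group identically; the skew in its outputs comes entirely from the historical records it learns from.

```python
from collections import Counter

# Hypothetical historical loan decisions: the counting logic below is
# neutral, but the records already skew against one postcode.
historical = (
    [("postcode_A", "approved")] * 80 + [("postcode_A", "rejected")] * 20
    + [("postcode_B", "approved")] * 30 + [("postcode_B", "rejected")] * 70
)

counts = Counter(historical)

def approval_rate(postcode: str) -> float:
    """Estimate P(approved | postcode) purely from the historical counts."""
    approved = counts[(postcode, "approved")]
    rejected = counts[(postcode, "rejected")]
    return approved / (approved + rejected)

print(approval_rate("postcode_A"))  # 0.8
print(approval_rate("postcode_B"))  # 0.3 -- the data's skew, faithfully reproduced
```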
Large language models (LLMs), such as ChatGPT, are trained on datasets to analyse the language and syntax used in order to respond to prompts. Whilst an LLM or other generative AI may appear to understand a prompt or question, its generated answer is simply a recreation of what it predicts an appropriate response looks like, based on the data it has been trained on.
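As a rough illustration of that mechanism, the hypothetical sketch below builds a word-pair frequency table from a toy corpus. Real LLMs use neural networks trained on web-scale text, not a lookup table, but the principle of reproducing statistically likely continuations rather than understanding them is similar.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for an LLM's vastly larger training data.
corpus = (
    "the model predicts the next word "
    "the model repeats patterns in its data "
    "the model does not understand the question"
).split()

# Record which word follows which: the "knowledge" is just a frequency table.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def continue_text(start: str, length: int = 5) -> str:
    """Extend a prompt by repeatedly picking the most common next word."""
    words = [start]
    while len(words) <= length:
        options = follows.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(continue_text("the"))  # "the model predicts the model predicts"
```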
Not all AI bias is bad
Earlier this month at Google Cloud’s Next event, BT’s chief digital officer Harmeen Mehta spoke about AI bias at a customer roundtable. Mehta said that AI is entirely dependent on the data fed into it, and that this dependence presents businesses with opportunities as well as ethical obligations.
“If we really want to embrace AI,” she began, “we’ve got to find a way of letting it breathe and letting it learn from different people other than us and sharing that knowledge.”
Mehta stated that every AI system will naturally be biased towards the business it was designed for, and that algorithmic bias should not be categorised as an inherently negative trait. Expanding on this, she described the training process as making her AI biased towards her own company and its lingo.
“I want my AI to have a slightly different slant. My customer type is going to be slightly different,” she stated, referring to the other customers present at the event.
Whilst Mehta’s comments centred on the data that AI is trained on, Heather Dawe, UK head of data at IT company UST, took a more holistic view of where AI bias comes from.
While much of that bias stems from training data, Dawe said it was also important to examine the teams behind the software. She reminded Verdict that prejudices among an AI system’s human developers can also shape its behaviour, which she argues is a strong case for diverse workforces.
“Biased AI systems can create negative feedback loops: prejudiced decisions that amplify existing inequalities,” Dawe explained. “… if job recommendation systems are trained on data [that] has historically favoured male candidates over female candidates, the system is more likely to favour male over female candidates and can impact future hiring.”
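Dawe’s loop can be sketched in a few lines of hypothetical Python, with invented numbers and a deliberately crude ranking rule: a recommender that scores candidates by their group’s historical hire count hands every new opening to the group already ahead, so each round’s output makes the next round’s training data more skewed.

```python
# Hypothetical starting history: 70 past male hires, 30 past female hires.
hires = {"male": 70, "female": 30}

for rnd in range(1, 6):
    # The "model" scores candidates by their group's historical hire count,
    # so all 10 openings this round go to whichever group is already ahead.
    favoured = max(hires, key=hires.get)
    hires[favoured] += 10
    female_share = hires["female"] / sum(hires.values())
    print(f"round {rnd}: female share of training data = {female_share:.0%}")
    # Shares fall from 27% to 20% over five rounds: the loop feeds itself.
```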
Dawe’s concerns are already becoming a reality.
Just today (30 October), over 100 union groups and public sector officials signed an open letter addressed to the UK government warning that workers affected by AI would not be properly represented at the upcoming AI Safety Summit.
For millions of workers across the UK, the letter reads, the risk of being fired “by algorithm” or rejected for a loan based on their identity is already a reality.
Beyond the risk of profiling and amplifying existing oppression, biased AI could be bad news for the future of AI development.
“Biased AI can stifle innovation,” warns Thomson Reuters chief product officer for legal tech Kriti Sharma.
“[Biased AI systems] exclude certain groups from the benefits of technology, leading to innovations that do not adequately meet the needs of marginalised people or create equitable solutions,” she explained. “We need diversity among those creating AI solutions to effectively help tackle bias.”
Sharma also warned that AI systems perceived by users as biased on the basis of race, gender or age are less likely to be used or viewed favourably. For the 28% of businesses that told a 2023 GlobalData survey they were very confident about integrating AI into their products, accurately measuring and tackling algorithmic bias will need to be an integral practice.