A researcher from Queen’s University Belfast has developed an algorithm that could help address the issue of artificial intelligence (AI) bias.

Although AI has many applications, it also brings the risk of bias. Because AI is trained on large volumes of data, any human biases contained in that data are reflected in the connections its algorithms learn. For example, if shown images of doctors who are predominantly male, an AI will learn that doctors are less likely to be female.

This creates a significant issue when the technology is used in recruitment, insurance or policing, as there is a danger of it reinforcing existing bias rather than helping to eliminate it.

Last year, it emerged that an algorithm used in the US healthcare industry to predict patient risk demonstrated bias against black patients.

Dr Deepak Padmanabhan from the School of Electronics, Electrical Engineering and Computer Science and the Institute of Electronics, Communications and Information Technology worked with experts from the Indian Institute of Technology Madras to address an aspect of this issue.

Clustering algorithms and AI bias

When AI analyses large volumes of data, it groups the data based on common characteristics, a process known as clustering and carried out by clustering algorithms. However, this process can often be biased in terms of race, gender, age, religion and country of origin.
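To make that concrete, here is a minimal, hypothetical sketch (using scikit-learn's off-the-shelf k-means, not the researchers' own code): the clusters are formed purely from feature similarity, yet if the features are correlated with a sensitive attribute, the resulting groups can end up heavily skewed on it.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical applicant features (e.g. years of experience, test score).
# Because of historical bias, one group's features are shifted slightly.
group = rng.integers(0, 2, size=200)             # sensitive attribute: 0 or 1
features = rng.normal(loc=group[:, None] * 0.8,  # group-correlated shift
                      scale=1.0, size=(200, 2))

# Standard k-means knows nothing about the sensitive attribute...
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

# ...yet the clusters it produces can be heavily skewed on that attribute.
for c in (0, 1):
    share = group[labels == c].mean()
    print(f"cluster {c}: {share:.0%} of members from group 1")
```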

The researchers looked at the use of this technique to shortlist candidates in job recruitment, and how it can perpetuate AI bias.

Padmanabhan said:

“When a company is faced with a process that involves lots of data, it is impossible to manually sift through this. Clustering is a common process to use in processes such as recruitment where there are thousands of applications submitted. While this may cut back on time in terms of sifting through large numbers of applications, there is a big catch. It is often observed that this clustering process exacerbates workplace discrimination by producing clusters that are highly skewed.”

Studies suggest that candidates with white-sounding names receive 50% more call-backs than those with black-sounding names, and that candidates over 40 receive fewer call-backs than younger applicants.

To try to prevent such discrimination from occurring, Dr Padmanabhan has created a fair clustering algorithm called FairKM. It improves on previous attempts at fairer clustering, which have tended to focus on a single attribute, such as gender.

Padmanabhan said:

“Our fair clustering algorithm, called FairKM, can be invoked with any number of specified sensitive attributes, leading to a much fairer process.

“FairKM can be applied across a number of data scenarios where AI is being used to aid decision making, such as pro-active policing for crime prevention and detection of suspicious activities. This, we believe, marks a significant step forward towards building fair machine learning algorithms that can deal with the demands of our modern democratic society.”
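The article does not reproduce the paper's exact formulation, but the broad idea Dr Padmanabhan describes can be sketched as a k-means-style objective augmented with a penalty for clusters whose sensitive-attribute mix deviates from the dataset-wide mix, summed over every specified attribute. The function names and weighting below are illustrative assumptions, not the FairKM implementation:

```python
import numpy as np

def fairness_skew(labels, sensitive, k):
    """Total deviation of each cluster's sensitive-attribute mix from the
    dataset-wide mix, summed over any number of sensitive attributes."""
    skew = 0.0
    for attr in sensitive.T:                       # one column per attribute
        n_vals = attr.max() + 1
        overall = np.bincount(attr, minlength=n_vals) / len(attr)
        for c in range(k):
            members = attr[labels == c]
            if len(members) == 0:
                continue
            mix = np.bincount(members, minlength=n_vals) / len(members)
            skew += np.abs(mix - overall).sum()    # L1 gap for this cluster
    return skew

def fair_clustering_objective(X, labels, centers, sensitive, k, lam=1.0):
    """Illustrative objective: ordinary k-means cost plus a weighted
    fairness penalty; lam trades cluster tightness against balance."""
    kmeans_cost = ((X - centers[labels]) ** 2).sum()
    return kmeans_cost + lam * fairness_skew(labels, sensitive, k)
```

Minimising an objective of this shape pushes the algorithm toward clusters that are both coherent and balanced across all the listed attributes at once, which is the multi-attribute property the quote highlights.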


Read more: IBM counters bias claims with release of facial recognition dataset.