Artificial intelligence (AI) has quickly emerged as the defining technology of our society. Gartner forecasts that by 2020, AI will be a top-five investment priority for more than 30% of CIOs. UK tech companies have secured a £1 billion investment deal with the UK government to support their research and development.
Though the case for adopting AI remains compelling for businesses and consumers alike, there are legitimate concerns, as raised by The Guardian’s Inequality Project: “When the data we feed the machines reflects the history of our own unequal society, we are in effect asking the program to learn our own biases.” It has therefore become paramount that CIOs understand the uses of AI that can cause problems – the bad, the biased and the unethical – and what they can do to keep their business on the right side of the line.
AI bias in education
In what may well be the earliest reported instance of a tainted system, a program created in 1979 by an admissions dean at St. George’s Hospital Medical School in London ended up inadvertently excluding nearly all minority and female applicants. By 1986, staff members at the school had become concerned about potential discrimination, and it was eventually discovered that at least 60 minority and female applicants were being unfairly excluded each year. The prestigious British Medical Journal bluntly called this bias “a blot on the profession”, but the question remains why it took so long to come to light. Ultimately, the school was only mildly penalised, though it did offer reparations, including admitting some of the applicants who had been excluded.
Mortgage lending bias
A more modern example comes from mortgage lending. With the advent of AI, the mode of lending discrimination has shifted from human bias to algorithmic bias. A study co-authored by Adair Morse, a finance professor at the Haas School of Business, concluded that while “people writing the algorithms intend to create a fair system, their programming is having a disparate impact on minority borrowers — in other words, discriminating under the law”.
Redlining, the systematic segregation of non-white borrowers into less-favourable neighbourhoods by banks and real estate agents, is seemingly not a thing of the past. Surprisingly, the automation of the mortgage industry has only made it easier to hide redlining behind a user interface.
In 2000, Wells Fargo created a website to promote mortgages using a “community calculator” that helped buyers find the right neighbourhood in the US, according to Bruce Schneier. Using the buyer’s postcode and assumed race (inferred from the demographics of their current neighbourhood), the calculator recommended neighbourhoods similar to their own. And earlier this year, the US Department of Housing and Urban Development (HUD) brought a lawsuit against Facebook for racial bias in housing and mortgage advertisements.
Human resources
By far the most infamous instance of bias in recruiting and hiring came to public attention when Reuters reported that Amazon’s new recruiting engine was excluding women. In 2014, Amazon assembled a team that used more than 500 algorithms to automate the CV review process for engineers and coders. The team trained the system on the CVs of existing members of Amazon’s software teams – which were overwhelmingly male. Consequently, the system learned to disqualify anyone who had attended a women’s university or listed women’s organisations on their CV.
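To see how this happens mechanically, consider the toy sketch below (invented data, not Amazon’s actual system): a classifier trained on historically skewed hiring outcomes never sees gender as an input, yet it learns a negative weight for a proxy term such as “women’s”.

```python
# Toy illustration of proxy bias (invented data, not Amazon's system).
# Gender is never a feature, but because the historical "hired" labels
# are skewed, the model learns to penalise a proxy token.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "software engineer java aws",          # hired
    "backend developer python linux",      # hired
    "coder c++ distributed systems",       # hired
    "women's chess club captain python",   # rejected
    "women's university graduate java",    # rejected
    "engineer women's coding society",     # rejected
]
hired = [1, 1, 1, 0, 0, 0]  # biased historical outcomes

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# "women's" is tokenised as "women"; its learned weight is strongly
# negative, so any CV containing it is scored down.
idx = vec.vocabulary_["women"]
print("weight for proxy token:", model.coef_[0][idx])
```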
More and more companies are adopting algorithmic decision-making systems at every level of the HR process. As of 2016, 72% of job candidates’ CVs were being screened not by people but entirely by computers. That means job candidates and employees will deal with people less often – and stories like Amazon’s could become more common.
The good news is that some companies are making efforts to eliminate potential bias. One is Yva.ai, an analytics platform that avoids using any indicator that could lead to bias, such as gender, age or race, whether that indicator is primary (such as involvement in women’s activities or sports), secondary (such as names or graduation dates) or tertiary (such as attendance at elite universities, which has increasingly been called out as a signifier of bias against minorities).
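As a rough sketch of that approach – with hypothetical field names and tier assignments, not Yva.ai’s actual schema – the indicators can simply be scrubbed from a candidate record before any model sees it:

```python
# Hypothetical indicator scrubbing; field names and tiers are assumed,
# not taken from Yva.ai.
PRIMARY = {"gender", "age", "race", "womens_activities"}
SECONDARY = {"name", "graduation_date"}
TERTIARY = {"elite_university"}
BIAS_INDICATORS = PRIMARY | SECONDARY | TERTIARY

def scrub(candidate: dict) -> dict:
    """Return a copy of the record with direct and proxy bias
    indicators removed before it reaches any scoring model."""
    return {k: v for k, v in candidate.items() if k not in BIAS_INDICATORS}

candidate = {
    "name": "A. Example",
    "gender": "F",
    "graduation_date": 1998,
    "elite_university": True,
    "skills": ["python", "sql"],
    "years_experience": 12,
}
print(scrub(candidate))
# {'skills': ['python', 'sql'], 'years_experience': 12}
```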
LinkedIn, by contrast, has deployed systems that do not ignore gender information in profiles but instead collect and utilise it, using it to detect and correct for any potential bias. Search also plays into the problem: Google AdWords has been guilty of bias, with researchers from Carnegie Mellon University and the International Computer Science Institute discovering that male job seekers were more likely than women to be shown advertisements for high-paying executive positions.
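One way such a “collect and correct” step can work – loosely modelled on reranking work LinkedIn has described publicly, with invented data and field names – is to reorder search results so that the gender mix at every cut-off tracks the mix of the qualified candidate pool:

```python
# Sketch of fairness-aware reranking ("collect and correct"); the data
# and field names are invented, and this is not LinkedIn's actual code.
from collections import Counter

def rerank(candidates, pool_share):
    """candidates: dicts with a 'gender' key, sorted by relevance.
    pool_share: target share of each gender in the qualified pool.
    Greedily fills each slot with the most relevant candidate from
    the group currently furthest below its target share."""
    result, shown = [], Counter()
    remaining = list(candidates)
    while remaining:
        n = len(result) + 1  # position being filled
        best = max(remaining,
                   key=lambda c: pool_share[c["gender"]] - shown[c["gender"]] / n)
        remaining.remove(best)
        shown[best["gender"]] += 1
        result.append(best)
    return result

ranked = [{"id": 1, "gender": "M"}, {"id": 2, "gender": "M"},
          {"id": 3, "gender": "F"}, {"id": 4, "gender": "M"}]
print([c["id"] for c in rerank(ranked, {"F": 0.4, "M": 0.6})])
# [1, 3, 2, 4] -- the female candidate moves up to slot 2
```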
The CIO’s role in preventing AI bias
Leading tech companies are increasingly addressing the ethical use of data. Microsoft has illustrated the importance of ethics by developing a set of six ethical principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. All tech leaders would be wise to take a leaf out of its book. Technology must also be developed in accordance with international law. At this year’s G20 summit, finance ministers agreed for the first time on the G20’s own principles for responsible AI use. These include a human-centric approach to AI, setting out that countries must ensure the use of AI respects human rights and shares the benefits it offers.
If AI is to be bias-free, companies must support a holistic approach to AI technology. AI is only as good as the data behind it, so this data must be fair and representative of all people and cultures.
At the most basic level, CIOs need to question whether the AI applications they are building are moral, safe and right. Is the data behind your AI technology sound, or does it carry algorithmic bias? Are you rigorously reviewing AI algorithms to ensure they are properly tuned and trained to produce expected results against pre-defined test sets? Are you adhering to transparency principles (such as those set out in GDPR) in how AI technology affects the organisation internally and customers and partner stakeholders externally? Have you set up a dedicated AI governance and advisory committee, including cross-functional leaders and external advisers, to establish and oversee the governance of AI-enabled solutions?
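As one concrete example of what such a review against a pre-defined test set might look like – with illustrative data and thresholds – a governance committee could require a disparate impact check based on the four-fifths (80%) rule used in US employment guidance:

```python
# Sketch of a disparate impact audit (four-fifths rule); the groups,
# data and 0.8 threshold here are illustrative.
from collections import defaultdict

def disparate_impact(decisions, threshold=0.8):
    """decisions: iterable of (group, selected: bool) pairs.
    Returns per-group selection rates plus any group whose rate
    falls below `threshold` times the highest group's rate."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        selected[group] += ok
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < threshold * best}
    return rates, flagged

rates, flagged = disparate_impact([
    ("M", True), ("M", True), ("M", False),
    ("F", True), ("F", False), ("F", False),
])
print(rates)    # roughly {'M': 0.67, 'F': 0.33}
print(flagged)  # {'F': 0.33...} -- below 80% of the top rate, so flagged
```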
Ultimately, businesses have a legal and moral obligation to use AI ethically – but it’s also a business imperative. No CIO wants to be known for bad and biased use of AI.
Read More: Businesses must monitor AI bias more closely: CBI.