In cybersecurity, red team professionals are tasked with finding vulnerabilities before they become a problem. In artificial intelligence, flaws such as bias often become apparent only once a system has been deployed.
One way to catch these AI flaws early is for organisations to apply the red team concept when developing new systems, according to techno-sociologist and academic Zeynep Tufekci.
“Get a red team, get people in the room, wherever you’re working, who think about what could go wrong,” she said, speaking at Hitachi Vantara’s Next conference in Las Vegas, US, last week. “Because thinking about what could go wrong before it does is the best way to make sure it doesn’t go wrong.”
Referencing Hitachi CEO and president Toshiaki Higashihara’s description of digitalisation as having “lights and shadows”, Tufekci warned of the risks associated with letting the shadowy side go unchecked.
AI shadows
One of these “shadows” is when complex AI systems become black boxes, making it difficult even for the AI’s creators to explain how it made its decision.
Tufekci also cited the example of YouTube’s recommendation algorithm pushing people towards extremism. For example, a teenager could innocently search ‘is there a male feminism’ and then be nudged towards misogynistic videos, because such controversial content tends to attract more engagement.
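Tufekci’s point is about optimisation incentives rather than any published YouTube code, but a toy sketch, with invented titles and scores, illustrates the mechanism she describes: rank recommendations purely by engagement, and the most provocative content tends to float to the top.

```python
# Illustrative only: a toy engagement-ranked recommender, not YouTube's actual system.
# All titles, engagement figures and "extremity" scores below are invented.
from dataclasses import dataclass


@dataclass
class Video:
    title: str
    engagement: float   # e.g. normalised watch time plus comments
    extremity: float    # hypothetical 0-1 score for how provocative the content is


CATALOGUE = [
    Video("Intro to feminism", engagement=0.40, extremity=0.1),
    Video("Debate: is there a male feminism?", engagement=0.55, extremity=0.3),
    Video("Why feminism has gone too far", engagement=0.80, extremity=0.7),
    Video("The war on men", engagement=0.90, extremity=0.9),
]


def recommend(videos, k=3):
    """Rank purely by engagement; more provocative videos tend to win."""
    return sorted(videos, key=lambda v: v.engagement, reverse=True)[:k]


if __name__ == "__main__":
    for v in recommend(CATALOGUE):
        print(f"{v.title:40s} engagement={v.engagement:.2f} extremity={v.extremity:.1f}")
```

Run as written, the top-ranked results are the two most extreme items in the invented catalogue, which is the drift Tufekci warns a red team should be looking for.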
And while data can be used for good, it can also be used by authoritarian governments to repress their citizens, or by election consultancies to manipulate our votes.
Then there are the many instances of human bias finding its way into algorithms. These include recruitment AI reflecting the sexism of human employers, or facial recognition failing to work for people with darker skin.
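The article does not describe how such bias would be caught in practice, but a pre-deployment check is the kind of thing a red team could run. The hedged sketch below tests a hypothetical hiring model’s outcomes for disparate impact, loosely following the “four-fifths rule” used in US employment auditing; all figures are invented.

```python
# A minimal red-team style audit sketch: compare a model's selection rates across
# groups before deployment. The data and threshold here are hypothetical examples.
from collections import defaultdict


def selection_rates(decisions):
    """decisions: list of (group, hired_bool) pairs -> selection rate per group."""
    totals, hired = defaultdict(int), defaultdict(int)
    for group, was_hired in decisions:
        totals[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / totals[g] for g in totals}


def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    best-performing group's rate (the informal 'four-fifths rule')."""
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}


if __name__ == "__main__":
    # Hypothetical screening outcomes from a CV-ranking model.
    outcomes = ([("men", True)] * 45 + [("men", False)] * 55
                + [("women", True)] * 25 + [("women", False)] * 75)
    rates = selection_rates(outcomes)
    print("Selection rates:", rates)
    print("Flagged groups:", disparate_impact_flags(rates))
```

In this invented example the model selects 45% of men but only 25% of women, a ratio of roughly 0.56, so the check flags the disparity before the system reaches users.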
“If the data can be used to fire you, or to figure out protesters or to use for social control, or not hire people prone to depression, people are going to be like: ‘we do not want this’,” said Tufekci, who is an associate professor at the UNC School of Information and Library Science.
“What would be much better is to say, what are the guidelines?”
Using a red team to enforce AI ethics guidelines
Some guidelines already exist. In April 2019, the European Union’s High-Level Expert Group on AI presented seven key requirements for trustworthy AI.
These requirements include human agency and oversight, accountability, and technical robustness and safety. But what Tufekci suggests is having a team of people dedicated to ensuring such guidelines are adhered to.
“You need people in the room, who are going to say there’s light and there are shadows in this technology, and how do we figure out to bring more light into the shadowy side, so that we’re not blindsided, so that we’re not just sort of shocked by the ethical challenges when they hit us,” she explained.
“So we think about it ahead of time.”
However, technology companies often push back against regulation, usually warning that too much will stifle innovation.
“Very often when a technology is this new, and this powerful, and this promising, the people who keep talking about what could go wrong – which is what I do a lot – are seen as these spoilsport people,” said Tufekci.
“And I’m kind of like no – it’s because we want it to be better.”
Read more: Tom Van de Wiele Q&A: Reflections of a red teamer