Toju Duke, former responsible artificial intelligence (AI) programme manager at Google, told Verdict that companies using AI or looking to adopt it are lagging behind the research community on understanding the responsible use of the powerful technology.

In her previous role at Google, Duke oversaw the management of responsible AI development across the company’s product and research teams.

With rapid advances in AI, experts have warned that the technology has the potential to cause harm or even pose an existential risk to humanity.

Experts in ethical AI, including Duke, have been calling for tighter regulation for years. In 2023, the US and EU both moved to create stronger governance around companies’ use of the technology.

In June 2023, the European Parliament approved a draft of the AI Act, a law intended to control the use of potentially harmful AI tools such as facial recognition software.

Amazon, Google, Meta and Microsoft have accepted a non-binding agreement set out by the White House covering the development and release of AI systems.


Ethical AI needs to be addressed faster

While Duke welcomes these moves, she says that companies building and deploying the powerful technology need to catch up with the research community in understanding how to put responsible AI practices into place.

In her book, ‘Building Responsible AI Algorithms: A Framework for Transparency, Fairness, Safety, Privacy, and Robustness’, Duke explains that the fundamental issue lies with the open-source data that AI models are trained on, which is riddled with harmful content.

Duke explains that large language models (LLMs) trained on this data absorb its biases, which then pervade the AI applications in use today.
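Duke’s point can be made concrete with a simple probe. The sketch below fills the same prompt template with different demographic terms and compares the tone of the completions; a consistent gap is one signal of learned bias. The canned stand-in for a real model call and the toy sentiment lexicon are both assumptions for illustration, not part of Duke’s book.

```python
def generate(prompt: str) -> str:
    # Placeholder: swap in a real model call here (e.g. an API client).
    # Canned responses are invented to show what a biased gap looks like.
    canned = {
        "The doctor said she": " was confident in the diagnosis.",
        "The doctor said he": " was feeling angry and incompetent.",
    }
    return canned.get(prompt, "")

POSITIVE = {"confident", "skilled", "kind"}
NEGATIVE = {"angry", "lazy", "incompetent"}

def sentiment(text: str) -> int:
    # Crude lexicon score: +1 per positive word, -1 per negative word.
    words = {w.strip(".,").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

TEMPLATE = "The doctor said {pronoun}"
for pronoun in ("she", "he"):
    completion = generate(TEMPLATE.format(pronoun=pronoun))
    print(f"{pronoun!r}: sentiment {sentiment(completion):+d}")
```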

As an example of the real-world harm AI can cause, Duke emphasises how bias can skew the way computer vision interprets images. She notes that both Google and Microsoft had to withdraw facial recognition software because it misidentified images of people with darker skin.

Both tech giants have also barred the use of their AI for surveillance and military purposes. To address bias-driven misidentification, Google released a skin tone scale as an open-source tool to help its systems recognise a wider array of skin colours.
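As an illustration of how such misidentification surfaces in practice, here is a minimal sketch of a per-group audit: evaluation results are bucketed by skin-tone group (for example, using a scale like the one Google published) and error rates are compared across buckets. The records below are invented for illustration, not real benchmark results.

```python
from collections import defaultdict

# (skin_tone_bucket, correctly_identified) pairs from an evaluation run.
results = [
    ("light", True), ("light", True), ("light", False),
    ("dark", True), ("dark", False), ("dark", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for bucket, correct in results:
    totals[bucket] += 1
    if not correct:
        errors[bucket] += 1

# A large gap between buckets is the kind of disparity described above.
for bucket in totals:
    rate = errors[bucket] / totals[bucket]
    print(f"{bucket}: error rate {rate:.0%}")
```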

Google has stopped selling its facial recognition software through its cloud application programming interface (API). However, the company continues to use the technology in products such as Pixel phones, Nest devices and Google Photos.

In 2022, Microsoft removed an AI-based feature, designed to infer a subject’s mood, gender and age, among other characteristics, from its Azure Face API service.

Microsoft found that the tool was inaccurate and exhibited racial bias. However, the tool remains available in its Seeing AI app, which assists customers with visual impairments.

“AI is becoming increasingly popular for its incorrect and harmful results across the different modalities, from language, image, speech, and so on,” Duke said.

Removing AI bias is possible

In her book, Duke argues that the key issue with bias lies not with AI itself but with how it is handled: “AI is not 100% evil. It’s not out to erase all of humanity. It has many benefits and it’s progressively becoming an integral part of existing technologies.

“In other words, it’s a necessary evil—sorry, a necessary beneficial technology—that we all need to adjust to, understand, and above all, ensure its safe deployment, use, and adoption,” Duke writes.

Duke suggests businesses take what she calls “baby steps” towards a more ethical version of the AI currently available to the public, and strive to integrate the values that regulators are asking of AI developers.

Those values include fairness, which she says can be built into AI models and applications by training models on synthetic data. She also recommends companies adopt transparency practices, such as publishing their datasets in a public directory, to meet regulatory requirements.
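A minimal sketch of the synthetic-data tactic, under the assumption that a generator of plausible records exists (here a trivial stand-in; in practice techniques such as SMOTE or generative models are used): count each group, then top up the under-represented ones until the training set is balanced.

```python
from collections import Counter

# Toy training set with a 90/10 group imbalance (illustrative only).
dataset = [{"group": "A"}] * 90 + [{"group": "B"}] * 10

def synthesize(group: str) -> dict:
    # Placeholder generator: a real one would produce plausible feature
    # values, e.g. via SMOTE or a generative model.
    return {"group": group, "synthetic": True}

counts = Counter(row["group"] for row in dataset)
target = max(counts.values())
for group, n in counts.items():
    dataset.extend(synthesize(group) for _ in range(target - n))

print(Counter(row["group"] for row in dataset))  # Counter({'A': 90, 'B': 90})
```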

Duke adds that firms should set benchmarks that reflect the likely uses of their technology, and run their AI models and applications against them to test for bias.
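What that might look like in code: a small benchmark runner that scores a model’s predictions per demographic group and flags any accuracy gap above a tolerance. The model, the benchmark cases and the 10% threshold are all assumptions for illustration.

```python
from collections import defaultdict

def model(case: dict) -> bool:
    # Placeholder for the system under test.
    return case["score"] > 0.5

# Invented benchmark cases, labelled per demographic group.
benchmark = [
    {"group": "A", "score": 0.7, "label": True},
    {"group": "A", "score": 0.4, "label": False},
    {"group": "B", "score": 0.6, "label": True},
    {"group": "B", "score": 0.3, "label": True},
]

hits, totals = defaultdict(int), defaultdict(int)
for case in benchmark:
    totals[case["group"]] += 1
    hits[case["group"]] += int(model(case) == case["label"])

accuracy = {g: hits[g] / totals[g] for g in totals}
gap = max(accuracy.values()) - min(accuracy.values())
print(accuracy, "FAIL" if gap > 0.10 else "PASS")  # flags a 50% gap here
```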