Emotion-detecting AI should not be allowed to make important decisions, AI research institute AI Now has warned.

This is one of the recommendations made in the New York University institute’s annual report, which is intended to “ensure that AI systems are accountable to the communities and contexts they are meant to serve”.

Emotional AI, also known as affect recognition, uses artificial intelligence to analyse micro-expressions with the aim of identifying human emotion.
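For illustration only, the sketch below shows what a typical affect-recognition call looks like in practice, using the open-source DeepFace Python package as a stand-in; the library choice, the image path and the output handling are assumptions made for this example and are not drawn from the AI Now report.

```python
# Minimal affect-recognition sketch (illustrative only).
# Assumes the open-source deepface package: pip install deepface
from deepface import DeepFace

# Hypothetical input: a photo containing a single face.
results = DeepFace.analyze(img_path="candidate_photo.jpg", actions=["emotion"])

# Recent deepface versions return a list with one dict per detected face;
# each dict carries per-emotion scores and a single 'dominant_emotion' label.
for face in results:
    print(face["dominant_emotion"], face["emotion"])
```

The single “dominant emotion” label that such tools output is exactly the kind of inference the report argues cannot be reliably mapped back to a person’s actual emotional state.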

A number of companies are commercialising affect recognition technology for a range of applications including recruitment, monitoring students in the classroom, customer services and criminal justice.

For example, HireVue offers software that screens job candidates for different qualities, and BrainCo is developing headbands that it claims can detect students’ attention levels.

According to the report, the emotion-detection and recognition market was worth $12bn in 2018, and could grow to over $90bn by 2024. However, despite growing interest in the technology, AI Now warns that it is based on “markedly shaky foundations”.

AI Now warns of affect recognition

Although these technologies have potentially useful applications, the report says that such systems are at best incomplete and at worst “entirely lack validity”, failing to reliably identify emotions without considering context and often detecting facial movements that can be misinterpreted.

There is also evidence to suggest that the technology can show bias. For example, a study by Dr Lauren Rhue found that two emotion recognition programmes assigned more negative emotional scores to black players in a data set of 400 photos of NBA players.

The rapid growth of the technology is particularly concerning in contexts such as criminal justice, with the institute calling on those deploying affect recognition to “scrutinise why entities are using faulty technology to make assessments about character”.

As well as calling for more stringent regulation, the report states that affect recognition should not play a role in decisions such as “who is interviewed or hired for a job, the price of insurance, patient pain assessments, or student performance in school”, and says governments should prohibit its use in “high-stakes decision-making processes”.


Read More: Behavioral Signals: AI that predicts if you’re going to buy from the emotion in your voice.