Welcome to Emotion AI.
GlobalData defines emotion AI as a type of AI that falls under ‘sentience’, one of five advanced AI capabilities. It is also known as ‘affective computing’, or ‘artificial emotional intelligence’, and uses a combination of natural language processing, sentiment analysis, voice emotion AI, and facial movement analysis.
Emotion AI can discern changes in tone and inflection and recognise when they signal stress or anger, picking up subtleties in human expression that might otherwise escape human notice.
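As a rough illustration of how these strands combine, the sketch below pairs an off-the-shelf sentiment model with a crude pitch-variability heuristic for vocal stress. It is a minimal sketch in Python: the libraries are real, but the threshold and the decision rule are invented for illustration and do not reflect any vendor’s actual method.

```python
# Minimal sketch: text sentiment (NLP) + a crude vocal-stress heuristic.
# The 0.25 threshold and the decision rule are invented for illustration.
import numpy as np
import librosa
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a default model

def voice_stress_score(wav_path: str) -> float:
    """Crude proxy: higher pitch variability is often read as stress/arousal."""
    y, sr = librosa.load(wav_path, sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=65.0, fmax=500.0, sr=sr)
    f0 = f0[~np.isnan(f0)]                    # keep only voiced frames
    if f0.size == 0:
        return 0.0
    return float(np.std(f0) / np.mean(f0))    # normalised pitch variability

def assess(transcript: str, wav_path: str) -> str:
    text = sentiment(transcript)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    if text["label"] == "NEGATIVE" and voice_stress_score(wav_path) > 0.25:
        return "possible stress or anger"
    return "no strong signal"
```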
Some argue that if machines are capable of empathy, they will understand and work with us more intelligently. Others, however, argue that developing emotion AI is like opening Pandora’s box.
Protecting the Olympics
Paris hosts the 2024 Olympics next summer, and hundreds of AI-enabled cameras have already been installed to monitor crowds for suspicious behaviour. Events that would trigger alerts include abandoned bags, weapons, unauthorised access to restricted areas, and unusual crowd movements that could signal mass panic. Emotion AI, too, is being used to identify ‘dangerous people’ at border control stops in the US, Hungary, Latvia, and Greece.
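One of those triggers, unusual crowd movement, can be made concrete with a simple computer-vision sketch. What follows is an assumption-laden illustration, not the Paris system: it uses OpenCV’s dense optical flow and an invented threshold to flag a sudden spike in collective motion.

```python
# Illustrative sketch: flag sudden collective movement via dense optical flow.
# The video source and the 8.0 threshold are placeholder assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("crowd_feed.mp4")   # placeholder camera feed
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_motion = np.linalg.norm(flow, axis=2).mean()  # avg pixel displacement
    if mean_motion > 8.0:                              # invented threshold
        print("ALERT: sudden collective movement (possible panic)")
    prev_gray = gray
```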
It is important to note that individual faces will not be tracked or identified, and human officers will review alerts before acting. Even so, digital rights groups are protesting against the new system, fearing it will give the green light for AI-powered surveillance to continue after the Olympics.
There are also concerns over ‘predictive policing’, where algorithms analyse massive amounts of data to predict and prevent potential future crime. AI risks automating discrimination and perpetuating bias by relying on historical data to identify the areas and people deemed likely to commit a crime. Many, however, will accept that the extra safety precautions are worth the risk, given that this technology could have identified the truck used in the 2016 Nice terrorist attack.
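The feedback loop critics describe is easy to demonstrate with a toy simulation. In the sketch below, every number is invented: two districts have identical true crime rates, but patrols are allocated in proportion to historically recorded crime, and crime is only recorded where patrols are present.

```python
# Toy simulation of the predictive-policing feedback loop. All numbers are
# invented: both districts have the same true crime rate, but district B
# starts with more *recorded* crime because it was historically over-policed.
import numpy as np

rng = np.random.default_rng(0)
true_rate = np.array([0.05, 0.05])     # identical underlying crime rates
recorded = np.array([10.0, 20.0])      # district B starts over-policed

for year in range(10):
    patrol_share = recorded / recorded.sum()   # allocate patrols by past data
    # Crime is only recorded where officers are present to observe it.
    recorded += rng.poisson(true_rate * 1000 * patrol_share)

print(recorded / recorded.sum())   # B's inflated share persists: ~[0.33, 0.67]
```

The disparity never corrects itself, because the model’s inputs are shaped by its own past outputs.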
In the automotive sector, an MIT-led team of researchers is working on integrating emotion AI with vehicles. In the cabin, emotion AI could identify elevated blood pressure caused by anything from an argument with a passenger to an impending cardiac event, and adjust the vehicle’s speed accordingly.
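A hypothetical sketch of what such a system might look like follows; the interface, thresholds, and sensor source are invented for illustration and do not describe the MIT team’s actual design.

```python
# Hypothetical sketch: a physiological stress estimate gates a speed limiter.
# The CabinState interface and all thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class CabinState:
    heart_rate_bpm: float       # e.g. from a seat or wearable sensor
    baseline_bpm: float         # the driver's resting baseline
    speed_kmh: float

def target_speed(state: CabinState, road_limit_kmh: float) -> float:
    """Cap speed when physiological stress is well above the driver's baseline."""
    elevation = state.heart_rate_bpm / state.baseline_bpm
    if elevation > 1.5:          # possible cardiac event or acute distress
        return min(state.speed_kmh, 30.0)        # slow toward a safe stop
    if elevation > 1.2:          # elevated stress (e.g. an argument)
        return min(road_limit_kmh, state.speed_kmh)  # hold, don't accelerate
    return road_limit_kmh

print(target_speed(CabinState(130, 70, 110), road_limit_kmh=130))  # -> 30.0
```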
Employers, too, are using emotion AI to evaluate prospective employees, scoring them on empathy and emotional intelligence. Companies such as HireVue claim they can analyse video interviews to derive a candidate’s “employability score.” It is also being used in education, where schools are trialling it to monitor pupil engagement levels.
Regulating emotion AI
Europe reached a historic agreement on its risk-based AI Act on December 8, 2023, following three days of negotiation and disagreement between member states over using AI to police crime.
Being the first continent to place human guardrails around AI has its pros and cons. The EU’s AI Act is likely to have a global regulatory impact, much as GDPR did, but being first to regulate AI could also stifle innovation and deter investors. Under the Act, AI-powered facial recognition for surveillance and law enforcement will be banned as an “unacceptable risk,” alongside AI systems or applications that “manipulate human behaviour to circumvent users’ free will.”
Police and national security bodies will be prohibited from using real-time, AI-powered biometric identification without judicial authorisation; the prohibition applies in both public and private spaces, except in the event of specified serious crimes, such as a terrorist threat. Some of the most powerful AI systems, such as the social credit scoring systems used in China, would be banned altogether.
The restrictions imposed by the AI Act leave many potential applications for emotion AI untouched, such as mental healthcare. Twill, a therapeutic intelligence company, uses a chatbot trained to provide personalized care and support in an empathetic way.
A brave new world of emotion AI
AI technology is developing at a rate that government legislation cannot easily keep up with. If, by training emotion into intelligent systems, machines come to understand humans, some fear that AI will use that understanding to manipulate our emotions.
Emotion markers are rarely universal. Many emotion-detecting systems are based on the now-dated work that psychologist Paul Ekman conducted in the 1970s; subsequent research, including Ekman’s own, supports the theory that people from different backgrounds express emotion in different ways. Another concern is that people not speaking their first language, or speakers of a regional dialect, might hesitate or mispronounce a word, which could be mistaken for an emotional marker. There is also the question of whether consumers have consented to being analysed by emotion AI at all, which further fuels privacy concerns.
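The hesitation problem is easy to see in a toy example. Below, a naive ‘pause ratio’ feature, invented here for illustration, fires just as strongly on ordinary second-language word-finding as on anxious speech, because it measures only silence.

```python
# Toy illustration: a naive pause-based feature cannot distinguish emotional
# hesitation from ordinary second-language word-finding. Data are invented.
import numpy as np

def pause_ratio(frame_energies: np.ndarray, silence_thresh: float = 0.01) -> float:
    """Fraction of audio frames below an energy threshold (i.e. pauses)."""
    return float(np.mean(frame_energies < silence_thresh))

fluent   = np.array([0.20, 0.30, 0.25, 0.005, 0.30, 0.28])   # one short pause
hesitant = np.array([0.20, 0.005, 0.004, 0.30, 0.006, 0.25]) # frequent pauses

# A system keyed only on pauses reads word-finding as an "emotional marker":
print(pause_ratio(fluent), pause_ratio(hesitant))   # ~0.17 vs 0.5
```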
For now, the immediate concerns are data privacy, bias, antitrust, and misinformation. If data sets are biased, extrapolating to the larger population could have a detrimental effect by automating discrimination against already over-policed communities.
If the AI Act succeeds in steering the industry away from public policing and instead builds confidence in applications in which understanding of emotion is critical, it could have a transformative effect on the way we live and work. Trust and transparency begin at the data level. Eliminating explicit and implicit bias will be challenging, but necessary for emotion AI to have a future in our society.