A new form of artificial intelligence (AI) software is now able to read our emotions. Its developers aim to build emotional understanding into the software and to benefit consumers by making machines more sensitive to our needs.

The software is trained on a database in which human employees categorise images of faces according to the emotions they display. The developers also seek to be culturally sensitive by having Europeans tag images of Europeans, Asians tag images of Asians, and so on.

Currently, emotion AI is largely limited to the advertising and entertainment industries, where tech companies use it to analyse audience reactions to advertisements. At Piccadilly Circus in London – one of the biggest advertising spaces in the world – emotion AI technology tracks the faces of people looking at the large advertising screens and adjusts the advertisements shown based on what appeals to onlookers.

Major technology giants like Amazon, Facebook and Google are now funnelling attention and investment into emotion AI. We are thus likely to see more emotion AI products in the near future as these organisations become more adept at understanding human emotions digitally.

Read the full article on The Guardian: AI can read your emotions. Should it?

Analysis:

What is the impact of emotion AI, and will it benefit or harm our lives? On one hand, such technology can detect when individuals are distressed or depressed, and thus remind them to seek help or alert emergency services. On the other hand, it may lead to us being profiled by employers, insurance companies or immigration authorities. Imagine having to pay higher premiums simply because we smiled less than average and were therefore judged to have poorer health prospects.

At the same time, can we trust technology companies to make the ethical choice for consumers when there is a promise of large financial rewards for doing otherwise? Emotion AI may help organisations increase their revenue: Mothercare, for example, discovered that greeting shoppers with a smile at the door led to increased spending. What if this technology led to greater surveillance and affected the life decisions made about particular persons?

Finally, how reliable can emotion AI be when our faces do not necessarily reflect our emotions? A scowl may be due to a headache rather than a sign that we are upset. This may matter little if the information is only used to gauge the attractiveness of advertisements, but what if it is used, say, to deny someone entry into a country?

Questions for further personal evaluation: 

  1. How happy are you to be the subject of emotion detection? Why? 
  2. Is it problematic that onlookers at Piccadilly Circus do not explicitly consent to their faces being recorded? Does their being there implicitly express consent?


Useful vocabulary: 

  1. ‘adept’: very skilled or proficient at something
  2. ‘dystopian’: relating to an imagined place or state where everything is unpleasant or bad; typically a totalitarian or environmentally degraded one