A humanoid robot is displayed at the World Robot Conference held in Beijing, Aug 20, 2019. (Photo: VCG)
Artificial Intelligence, or AI, is fast becoming an effective way to automate day-to-day decisions. It is already saving us a lot of time and streamlining a plethora of industries in a manner not seen since the industrial revolution.
However, engineers, always on the hunt for advances in this impressive area of technology, are now seeking to make artificial intelligence more emotionally intelligent too.
The aim is not purely sentimental. Enabling AI to measure our emotions could be useful in a wide range of ways. For example, medical patients could be monitored by machines able to instantly gauge how much pain they are in. AI could also help with employment vetting procedures and in prison environments, not to mention sensing how customers feel about the products they are reviewing. The possibilities are endless.
Gauging another's emotions is a task that comes naturally to humans: we look at someone's face and reason about how they feel. Teaching machines to do the same thing without millions of years of evolutionary psychological selection, however, is no mean feat. Companies including Amazon, IBM, and Microsoft have all created what they call "emotion recognition" algorithms, which infer how people feel from an analysis of their face. At the moment, this technology, at least in mainstream adoption, is in its infancy and relies on obvious basic cues, such as furrowed brows and frowns to indicate anger, and wide eyes to indicate fear.
Interestingly, this technology may not be limited to humans; it might be tweaked for other living beings. Animals also possess sophisticated body language, which differs from our own. A smile baring teeth, for example, may indicate friendliness in humans, but to a chimpanzee it might signal aggression and a willingness to fight and bite. A nod and a lowering of the head may signal humility in humans, but to a goat or sheep it might be a sign of aggression, stemming from the head-butting manner in which rams fight for mating rights.
Opportunities are therefore also on the horizon for technology to analyze other species, deepening scientific research and helping to monitor the wellbeing of livestock in the agriculture industry.
The technology is still young, however, and is already causing controversy, especially in the United States.
The US AI think tank AI Now states that a large number of studies indicate that, as yet, there is little substantial evidence of a consistent, reliably predictable relationship between emotion and the way our faces look. Just as body language and facial features vary between species, they vary between people.
Experts suggest that the meaning of facial expressions varies across cultures, and even from one situation to another. AI, for example, may have a hard time recognizing sarcasm, someone joking, or the nuances of social situations.
The concept is exciting but, despite overzealous big tech companies investing a lot of money in the field, the technology may need to advance further before it is taken seriously and, more importantly, trusted by AI watchdogs and think tanks.
This is not to say that emotion-detecting AI does not have an immediate future; plenty of research is under way into its application in the medical field, where deciphering the pain and discomfort expressed by a face is a little easier. For now, though, its wider applications remain immature, with many skeptical about the limits of the technology.
Whether the vast sums of money invested by big tech into this exciting area of research changes this within the next few years remains to be seen.