Friday, July 27, 2018

Machines Interpreting Human Emotions

Helping Computers Perceive Human Emotions, in MIT News

MIT Media Lab researchers have developed a machine-learning model that takes computers a step closer to interpreting human emotions as naturally as people do.

In the growing field of "affective computing," robots and computers are being developed to analyze facial expressions, interpret a person's emotions, and respond accordingly. Applications include, for instance, monitoring an individual's health and well-being, gauging student interest in classrooms, helping diagnose signs of certain diseases, and developing helpful robot companions.

A challenge, however, is that people express emotions quite differently, depending on many factors. General differences can be seen among cultures, genders, and age groups. But other differences are even more fine-grained: the time of day, the amount of sleep, or the level of familiarity with a conversation partner leads to subtle variations in the way a person expresses, say, happiness or sadness in a given moment.

Human brains instinctively catch these deviations, but machines struggle. Deep-learning techniques have been developed in recent years to help catch these subtleties, but they're still not as accurate or as adaptable across different populations as they could be.
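To make the kind of deep-learning approach the article alludes to more concrete, here is a minimal sketch of a convolutional network that classifies face images into discrete emotion categories. This is illustrative only, not the Media Lab's model; the architecture, the 48x48 grayscale input size, and the seven-label set are assumptions borrowed from common public expression datasets.

```python
# Minimal sketch of a CNN facial-expression classifier (illustrative only;
# not the MIT Media Lab model). Assumes 48x48 grayscale face crops and
# seven emotion labels, as in common public datasets.
import torch
import torch.nn as nn

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 48x48 -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 24x24 -> 12x12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, num_classes),                  # emotion logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = EmotionCNN()
face_batch = torch.randn(8, 1, 48, 48)   # stand-in for preprocessed face crops
logits = model(face_batch)
predicted = [EMOTIONS[i] for i in logits.argmax(dim=1).tolist()]
print(predicted)
```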

The Media Lab researchers have developed a machine-learning model that, trained on thousands of images of faces, outperforms traditional systems at capturing these small facial-expression variations and thereby gauging mood. Moreover, with a little extra training data, the model can be adapted to an entirely new group of people with the same efficacy. The aim is to improve existing affective-computing technologies. ... " 
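The point about adapting to a new group of people with only a little extra data is the general idea behind fine-tuning a pretrained model. Continuing the sketch above (and again as an assumption, not the authors' actual method), one common way to adapt with scarce data is to freeze the learned visual features and retrain only the classifier head on the new population:

```python
# Illustrative sketch (an assumption, not the authors' method): adapting a
# pretrained emotion classifier to a new group of people by fine-tuning on
# a small amount of extra labeled data. Reuses EmotionCNN and EMOTIONS from
# the previous sketch.
import torch
import torch.nn as nn

model = EmotionCNN()   # assume weights pretrained on the original population

# Freeze the convolutional features; retrain only the classifier head,
# a standard way to adapt when little new data is available.
for p in model.features.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a small labeled batch from the new population.
new_faces = torch.randn(16, 1, 48, 48)
new_labels = torch.randint(0, len(EMOTIONS), (16,))

for _ in range(10):                       # a few adaptation steps
    optimizer.zero_grad()
    loss = loss_fn(model(new_faces), new_labels)
    loss.backward()
    optimizer.step()
```

Freezing most of the network keeps the handful of new examples from overwriting what was learned from the thousands of original training faces, which is why this style of adaptation can work with so little data.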