
Tuesday, April 03, 2018

Computing and Emotional Intelligence

We worked with MIT on related systems and followed Rosalind Picard's research.  Most recently, we have been looking at what Watson has done with related skills in emotion and personality detection.  Consider deriving personality as a means of understanding emotion.  There are obvious connections to current assistants.  How can such systems both express and detect emotions?  A good overview article on the challenge:

Artificial (Emotional) Intelligence    By Marina Krakovsky
Communications of the ACM, Vol. 61 No. 4, Pages 18-19, 10.1145/3185521

Anyone who has been frustrated asking questions of Siri or Alexa—and then annoyed at the digital assistant's tone-deaf responses—knows how dumb these supposedly intelligent assistants are, at least when it comes to emotional intelligence. "Even your dog knows when you're getting frustrated with it," says Rosalind Picard, director of Affective Computing Research at the Massachusetts Institute of Technology (MIT) Media Lab. "Siri doesn't yet have the intelligence of a dog," she says.

Yet developing that kind of intelligence—in particular, the ability to recognize human emotions and then respond appropriately—is essential to the true success of digital assistants and the many other artificial intelligences (AIs) we interact with every day. Whether we're giving voice commands to a GPS navigator, trying to get help from an automated phone support line, or working with a robot or chatbot, we need them to really understand us if we're to take these AIs seriously. "People won't see an AI as smart unless it can interact with them with some emotional savoir faire," says Picard, a pioneer in the field of affective computing.

One of the biggest obstacles has been the need for context: the fact that emotions can't be understood in isolation. "It's like in speech," says Pedro Domingos, a professor of computer science and engineering at the University of Washington and author of The Master Algorithm, a popular book about machine learning. "It's very hard to recognize speech from just the sounds, because they're too ambiguous," he points out. Without context, "ice cream" and "I scream" sound identical, "but from the context you can figure it out." ...
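Domingos' point can be made concrete with a toy sketch (hypothetical, not from the article): an acoustically ambiguous phrase is resolved by looking at the words around it. The word lists and function below are illustrative assumptions, not any real speech-recognition API.

```python
# Toy context-based disambiguation: without context, "ice cream" and
# "I scream" sound identical; neighboring words can resolve the ambiguity.

DESSERT_CONTEXT = {"chocolate", "vanilla", "cone", "scoop", "eat"}
FEAR_CONTEXT = {"loudly", "terror", "scared", "pain", "screamed"}

def disambiguate(transcript_words, ambiguous_index):
    """Guess whether the ambiguous token means 'ice cream' or 'I scream'
    by checking a small window of surrounding words."""
    lo = max(0, ambiguous_index - 2)
    window = set(transcript_words[lo:ambiguous_index + 3])
    if window & DESSERT_CONTEXT:
        return "ice cream"
    if window & FEAR_CONTEXT:
        return "I scream"
    return "ambiguous"

print(disambiguate(["i", "eat", "AMBIG", "with", "chocolate"], 2))  # ice cream
print(disambiguate(["she", "heard", "AMBIG", "loudly", "outside"], 2))  # I scream
```

Real systems use statistical language models rather than hand-written word lists, but the principle is the same: the ambiguous signal is interpreted through its context, which is exactly what emotion recognition also requires.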
