Wondering if the Prisoner's Dilemma is the right thing to use here, but I agree that showing typical human emotions can bring sympathy to a machine as it can to humans. Context dependent though, and the PD is a contrived context. Not a big enough sample either. Also it's interesting that these are specifically 'avatars', or human-like agents. Would it be different if it was not an avatar? I like that the test is being done though. Perhaps one way of certifying AI agents.
Making AI More Human
University of Waterloo News
A study by researchers at the University of Waterloo in Canada found that adding appropriate emotions to artificial intelligence (AI) avatars makes humans more accepting of them. Waterloo's Moojan Ghafurian, Neil Budnarain, and Jesse Hoey employed the classic Prisoner's Dilemma game, substituting one of the two human "prisoners" with a virtual AI developed at the University of Colorado, Boulder. The researchers used virtual agents that displayed either no emotions, appropriate emotions, or random emotions. Participants cooperated 20 out of 25 times with the AI that exhibited human-like emotions, 16 out of 25 times with the agent showing random emotions, and 17 out of 25 times with the emotionless agent. Ghafurian said, "Showing proper emotions can significantly improve the perception of humanness and how much people enjoy interacting with the technology."
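For readers unfamiliar with the setup, a one-shot Prisoner's Dilemma round can be sketched in a few lines. The payoff values below are the textbook illustrative ones (the summary does not give the study's actual values); the cooperation rates are the ones reported above.

```python
# Minimal sketch of a one-shot Prisoner's Dilemma round.
# Payoff values are illustrative textbook numbers, not the study's.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def play_round(human_move: str, agent_move: str) -> tuple:
    """Return (human_payoff, agent_payoff) for one round."""
    return PAYOFFS[(human_move, agent_move)]

# Cooperation rates reported in the article, per agent condition.
cooperation_rate = {
    "appropriate_emotions": 20 / 25,  # 80%
    "random_emotions":      16 / 25,  # 64%
    "no_emotions":          17 / 25,  # 68%
}
```

The dilemma is visible in the payoff table: defecting dominates for each player individually, yet mutual cooperation pays both better than mutual defection, which is why cooperation rates are a meaningful measure of trust in the other player.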
Friday, May 24, 2019