We saw some of this in our own chatbot testing. It is also a famous theme in studies of how people react to human-like cues in chat interactions, as in The Media Equation. Is the testing of this idea reasonably designed here?
Sometimes, Computer Programs Seem Too Human for Their Own Good
In The Economist
Researchers at Chungbuk National University in South Korea say they have demonstrated that increasingly human-like machines can evoke feelings of embarrassment, making some users hesitant to use assistive artificial intelligence. In one experiment, nearly 200 volunteers who initially believed intelligence to be unchangeable felt more embarrassed and incompetent after a test in which they were shown 16 sets of three words and asked to think of a fourth word linking each set; half of the cohort received hints accompanied by an anthropomorphic computer-shaped icon. A second experiment let a different group of participants ask for help rather than having it assigned at random, with similar results. The researchers concluded that some people appear to avoid seeking help from an anthropomorphic icon in order to save face, suggesting there are situations in which the aggressive pseudo-humanization of machine-human interactions could usefully be reduced.