
Monday, May 24, 2021

Stanford Chatbot Study

Mostly obvious results, but useful characterizations. In our own work we found a 'concierge' model most important: how do you get people to the right humans, the ones with the best answer in context? I also recall including a 'competence in context' rating among our measures, along with 'ongoing engagement', which was useful for future marketing connections.

Do chatbots need to be more likable? by Tom Ryan in RetailWire

A new Stanford University study finds people will more readily use a chatbot if they perceive it to be friendly and competent, and less so if it projects overconfidence and arrogance. The challenge, the authors say, is finding the right balance.

Across three studies with 300 participants in the U.S., researchers tested reactions to AI bots with the same underlying functionality but different descriptions.

Among the findings:

Low-competence descriptions (e.g., “this agent is like a toddler”) led to increases in perceived usability, intention to adopt and desire to cooperate relative to high-competence descriptions (e.g., “this agent is trained like a professional”). 

People are more likely to cooperate with and help an agent that projects higher warmth (e.g., “good-natured” or “sincere”).

Descriptions “are powerful,” helping drive user adoption and engagement with chatbots.

The authors suggested chatbots need to instill confidence that they are worthwhile to engage with. At the same time, acknowledging that some errors may occur early on, as the chatbot learns what users want, will likely help people become more accepting of a chatbot’s mistakes. Pranav Khadpe, a co-author, told The Wall Street Journal, “You really want to manage the expectations you set before the first interaction.”  ... '
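That expectation-setting advice translates directly into how a bot introduces itself. Below is a minimal Python sketch of a first-contact greeting that pairs high-warmth language with a modest competence framing and a concierge-style handoff to a human. The function name and wording are illustrative assumptions, not taken from the study or from any particular system.

```python
# Illustrative sketch only: composes a chatbot's first message to set
# expectations along the lines the study suggests. All wording here is
# hypothetical, not from the Stanford study.

def expectation_setting_greeting(bot_name: str) -> str:
    # High warmth: friendly, sincere tone ("good-natured", "sincere").
    warmth = f"Hi, I'm {bot_name}! Happy to help however I can."
    # Modest competence framing: admit the bot is still learning, so
    # users are more forgiving of early mistakes.
    competence = ("I'm still learning, a bit like a new trainee, "
                  "so I may get some things wrong at first.")
    # Concierge fallback: route to a human when the bot can't help.
    fallback = "If I get stuck, I'll connect you with a person who can."
    return " ".join([warmth, competence, fallback])

if __name__ == "__main__":
    print(expectation_setting_greeting("Ada"))
```

The design choice is simply to front-load the expectation management before the first real exchange, which is where Khadpe's quote says it matters most.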
