Much akin to our own attempts at putting both bots and humans in a conversational path, and looking for signals in the interaction to determine when to shift from one kind of agent to the other.
I assume that almost always means bailing to a human agent. A very natural idea. So what's the beef? Are you fooling 'some of the people some of the time', and detecting whether they get alarmed, confused, or angry when they realize they have been fooled?
My main concern is that there will be many more such calls, as we have been seeing lately, because robocalls are cheaper to deliver than calls placed by people. A 'concierge model' like the one we used employed a clearly robotic agent, which then passed people off to either a person or a chatbot.
In The Verge:
A quarter of Google Duplex calls are actually placed by humans
If the AI sounds eerily human, well, that’s because they just might be. By Natt Garun
Earlier this month, I shadowed several restaurants throughout New York and talked to restaurant employees across the US to see how they’ve received Google Duplex, the AI that makes life-like calls for reservations on your behalf. Most agreed that the AI sounded unmistakably human — and according to Google’s response to reporting by The New York Times, there’s a 25 percent chance that they were.
Google says that a quarter of Duplex calls start with human callers, and 15 percent start with the AI but are later taken over by a person from the Duplex call center. The company told The New York Times that it uses a variety of signals to decide whether a call should be placed by a human or a robot, “like if the company is unsure of whether the business takes reservations, or if the user of the assistant might be a spammer.” ...
Monday, June 10, 2019