A very good piece here. What is the difference and essence of voice interfaces, or even interfaces that are partially voice-based? This is not necessarily AI, but it is closer to the cognitive than what we had before. Remember, too, that voice more strongly implies a conversation is occurring, which embeds context, memory, and a model of who or what you are talking to. None of this has been perfected, as is clear from my interactions with Echos and Google Home over the past few years.
“Alexa, Understand Me,” in Technology Review, by George Anders:
Voice-based AI devices aren’t just jukeboxes with attitude. They could become the primary way we interact with our machines.
On August 31, 2012, four Amazon engineers filed the fundamental patent for what ultimately became Alexa, an artificial intelligence system designed to engage with one of the world’s biggest and most tangled data sets: human speech. The engineers needed just 11 words and a simple diagram to describe how it would work. A male user in a quiet room says: “Please play ‘Let It Be,’ by the Beatles.” A small tabletop machine replies: “No problem, John,” and begins playing the requested song.
From that modest start, voice-based AI for the home has become a big business for Amazon and, increasingly, a strategic battleground with its technology rivals. Google, Apple, Samsung, and Microsoft are each putting thousands of researchers and business specialists to work trying to create irresistible versions of easy-to-use devices that we can talk with. “Until now, all of us have bent to accommodate tech, in terms of typing, tapping, or swiping. Now the new user interfaces are bending to us,” observes Ahmed Bouzid, the chief executive officer of Witlingo, which builds voice-driven apps of all sorts for banks, universities, law firms, and others. ...
Wednesday, August 09, 2017