
Saturday, October 24, 2020

Benchmarking Voice Understanding

Good points made. I have been using Google Assistant alongside Amazon Alexa for a few years now, and Siri only now and then. I see more 'balking' from Alexa (that is, she does not answer coherently at all) than from Google Assistant, but also more 'understanding' when she does answer; Alexa is in general more 'human' in conversation. Beyond that, I don't see adequate contextual understanding from either, and how much that matters depends on how important and risky the dependent decisions are. Google does a good job of multilingual understanding when properly set up. Here Voicebot.ai has taken a broader look that is worth reading. Neither, in my opinion, can understand and answer what I would call 'complex questions'.

Understanding Is Crucial for Voice and AI: Testing and Training are Key To Monitoring and Improving It, by John Kelvie in Voicebot.ai

BENCHMARKING VOICE ASSISTANTS

How well does your voice assistant understand and answer complex questions? It is often said that making complex things simple is the hardest task in programming, as well as the highest aim for any software creator. The same holds true for building for voice. And the key to ensuring an effortlessly simple voice experience is accuracy of understanding, achieved through testing and training.

To dig deeper into the process of testing and training for accuracy, Bespoken undertook a benchmark to test the Amazon Echo Show 5, Apple iPad Mini, and Google Nest Home Hub. This article explores what we learned through this research and the implications for the larger voice industry based on other products and services.

For the benchmark, we took a set of nearly 1,000 questions from the ComQA dataset and ran them against the three most popular voice assistants: Amazon Alexa, Apple Siri, and Google Assistant. The results were impressive: these questions were not easy, and the assistants often handled them with aplomb. ...
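To make the setup concrete, here is a minimal Python sketch of what a scoring loop like the one described above might look like: load ComQA-style question/answer pairs, put each question to an assistant, and count loose matches against the accepted answers. The file name and the `ask_assistant` stub are hypothetical placeholders for illustration only; they are not Bespoken's actual tooling or any real device API.

```python
import json
from typing import Dict, List


def ask_assistant(assistant: str, question: str) -> str:
    """Hypothetical stand-in: send `question` to `assistant` and return its spoken answer.
    In a real benchmark this would drive the physical device or a voice API."""
    raise NotImplementedError("Replace with real device/API automation.")


def is_correct(answer: str, accepted: List[str]) -> bool:
    """Loose string match: any accepted answer appearing in the response counts."""
    answer = answer.lower()
    return any(a.lower() in answer for a in accepted)


def run_benchmark(path: str, assistants: List[str]) -> Dict[str, float]:
    # Expected file shape (assumed): [{"question": "...", "answers": ["..."]}, ...]
    with open(path) as f:
        questions = json.load(f)

    scores = {}
    for name in assistants:
        correct = 0
        for q in questions:
            try:
                reply = ask_assistant(name, q["question"])
            except NotImplementedError:
                reply = ""  # unanswered questions simply count as incorrect
            if is_correct(reply, q["answers"]):
                correct += 1
        scores[name] = correct / len(questions)
    return scores


if __name__ == "__main__":
    print(run_benchmark("comqa_sample.json", ["alexa", "siri", "google"]))
```

Real harnesses are fussier about scoring (paraphrased answers, partial matches, unanswerable questions), but the basic shape is the same: a fixed question set, one answer per assistant, and an accuracy figure per assistant at the end.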
