
Monday, March 15, 2021

Tech Firms Train Voice Assistants to Understand Atypical Speech

A summary piece on the topic. Accessibility is a fundamental consideration, and it is good to see that many of the big players are examining it. I recall that our own enterprise effort quickly ran into this challenge.


Voice assistants like Alexa and Siri often can’t understand people with dysarthria or a stutter; their creators say that may change

Amazon recently announced a tie-up with Voiceitt, a startup that lets people with speech impairments train an algorithm to recognize their vocal patterns.
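The details of Voiceitt's pipeline are not public, but the general approach the article describes, adapting a speech model to one person's vocal patterns from a small set of their own recordings, can be sketched roughly as below. Everything here is an illustrative assumption, not Voiceitt's or Amazon's implementation: the wav2vec2 base model, the file names, the sample phrases, and the tiny training loop are stand-ins.

```python
# Minimal sketch: personalizing a generic ASR model by fine-tuning on a
# handful of a user's own recordings. Model choice, file paths, and phrases
# are hypothetical; this is not Voiceitt's actual system.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
model.freeze_feature_encoder()          # keep low-level acoustic features fixed
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Hypothetical personal samples: short clips of the user saying known phrases.
samples = [
    ("alexa_news.wav", "ALEXA TELL ME THE NEWS"),
    ("lights_off.wav", "TURN OFF THE LIGHTS"),
]

model.train()
for epoch in range(5):                  # a few passes over the tiny personal set
    for path, transcript in samples:
        waveform, sr = torchaudio.load(path)
        waveform = torchaudio.functional.resample(waveform, sr, 16_000)
        inputs = processor(waveform.squeeze().numpy(),
                           sampling_rate=16_000, return_tensors="pt")
        labels = processor.tokenizer(transcript, return_tensors="pt").input_ids
        loss = model(inputs.input_values, labels=labels).loss   # CTC loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

In practice a production system would use far more data per user, careful regularization to avoid overfitting to a few clips, and likely on-device or privacy-preserving training, but the core idea of adapting a shared model with personal samples is the same.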

By Katie Deighton   Feb. 24, 2021 12:00 pm ET

Dagmar Munn and her husband purchased a smart speaker from Amazon.com Inc. for their home in Green Valley, Ariz., in 2017, seven years after Ms. Munn was diagnosed with amyotrophic lateral sclerosis, the motor neuron disease more commonly referred to as ALS.

At first the speaker’s voice assistant, Alexa, could understand what Ms. Munn was saying. But as her condition worsened and her speech grew slower and more slurred, she found herself unable to communicate with the voice technology.

“I’m not fast enough for it,” Ms. Munn said. “If I want to say something like ‘Alexa, tell me the news,’ it will shut down before I finish asking.”

Ms. Munn can’t interact with voice assistants such as Alexa because the technology hasn’t been trained to understand people with dysarthria, a speech disorder caused by weakening speech muscles. People with a stutter or nonstandard speech caused by hearing loss or mouth cancer can also struggle to be understood by voice assistants.

Approximately 7.5 million people in the U.S. have trouble using their voices, according to the National Institute on Deafness and Other Communication Disorders. Julie Cattiau, a product manager in Google’s artificial intelligence team, said that group is at risk of being left behind by voice-recognition technology.
