I would have thought this would be very difficult to do, given the syntactic accuracy coding requires. So a fascinating case study and demonstration video:
Code Talkers, by Neil Savage
Communications of the ACM, May 2019, Vol. 62 No. 5, Pages 18-19. DOI: 10.1145/3317681
When Tavis Rudd decided to build a system that would allow him to write computer code using his voice, he was driven by necessity.
In 2010, he tore his rotator cuff while rock-climbing, forcing him to quit climbing while the injury healed. Rather than sitting idle, he poured more of his energy into his work as a self-employed computer programmer. "I'd get in the zone and just go for hours," he says. Whether it was the increased time pounding away at a keyboard or the lack of other exercise, Rudd eventually developed a repetitive strain injury (RSI) that caused his outer fingers to go numb and cold, leaving him unable to type or code without pain.
Worried that he would not be able to do his job, Rudd turned to Dragon Naturally Speaking voice recognition software to see if that could help. He quickly discovered that he could insert commands into Dragon using the programming language Python, and that he could use the Python-based application programming interface (API) Dragonfly to create lists of words and link them to specific actions he wanted Dragon to perform.
So he set about creating such a list, known as a grammar, of words that would cause a text editor such as Emacs to take certain actions—insert or delete characters, add a bracket, move the cursor up some number of lines. He created this grammar with strange words, such as ak or par, to avoid confusing the speech recognition software with common English words and to keep the number of syllables per command down to one or two, so programming this way would be speedy. …"
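The core idea of such a grammar is just a mapping from short spoken tokens to editor actions. As a rough sketch of that idea in plain Python (this is an illustration only, not the actual Dragonfly API; the command words beyond "ak" and "par" and all of the actions are hypothetical examples):

```python
# Illustrative sketch: short, unusual spoken tokens mapped to editor
# actions, as in a voice-coding grammar. A real Dragonfly grammar binds
# mappings like these to a speech engine; here we just model the lookup.
# All command words and action strings are hypothetical examples.

GRAMMAR = {
    "ak": "press escape",            # one-syllable token -> editor key
    "par": "insert ()",              # "par" types a pair of parentheses
    "up <n>": "move cursor up <n> lines",  # command with a numeric slot
}

def dispatch(utterance):
    """Look up a recognized utterance, filling in a numeric argument."""
    words = utterance.split()
    # Two-word utterances ending in a number match templated commands.
    if len(words) == 2 and words[1].isdigit():
        template = f"{words[0]} <n>"
        if template in GRAMMAR:
            return GRAMMAR[template].replace("<n>", words[1])
    return GRAMMAR.get(utterance, "unknown command")

print(dispatch("par"))    # -> insert ()
print(dispatch("up 3"))   # -> move cursor up 3 lines
```

Keeping the tokens to one or two syllables, as Rudd did, matters here: shorter utterances are faster to speak and less likely to collide with ordinary dictated English.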