
Tuesday, October 22, 2019

New Tools Announced for Alexa NLU Dev

I'm impressed by the number of new capabilities being rolled out for skill delivery on Alexa. Yet I still see quite a few foundational problems with natural language understanding on Alexa, which I use at the skill and foundation level every day, and those problems make for a shaky impression during demonstrations. Does this mean Amazon has hit some fundamental limitation of the technology for now?

Build, Test, and Tune Your Skills with Three New Tools  (Full detail at link) 
October 09, 2019
By Leo Ohannesian

We’re excited to announce the General Availability of two tools that focus on your voice model’s accuracy: the Natural Language Understanding (NLU) Evaluation Tool and Utterance Conflict Detection. We are also excited to announce that you can now build your own quality and usage reporting with the Get Metrics API, now in Beta. These tools help complete the suite of Alexa skill testing and analytics tools that aid you in creating and validating your voice model before publishing your skill, detecting possible issues once your skill is live, and refining your skill over time.
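For developers who want to wire the Get Metrics API into their own reporting, a call has roughly the shape of the Python sketch below. The endpoint path, query parameters, and metric name are assumptions based on this announcement and the SMAPI documentation of the time, so verify them against the current API reference before relying on them.

import requests

SKILL_ID = "amzn1.ask.skill.<your-skill-id>"   # placeholder skill ID
ACCESS_TOKEN = "<LWA-access-token>"            # placeholder Login with Amazon token

# Request one week of daily unique-customer counts for a live custom skill.
resp = requests.get(
    f"https://api.amazonalexa.com/v1/skills/{SKILL_ID}/metrics",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={
        "startTime": "2019-10-01T00:00:00Z",
        "endTime": "2019-10-08T00:00:00Z",
        "period": "P1D",              # one data point per day (ISO 8601 duration)
        "metric": "uniqueCustomers",  # assumed metric name
        "stage": "live",
        "skillType": "custom",
    },
)
resp.raise_for_status()
print(resp.json())  # response schema omitted here; see the API documentation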

The NLU Evaluation Tool helps you batch test utterances and compare how your skill’s NLU model interprets them against your expectations (a sketch of the idea follows the list below). The tool has three use cases:

Prevent overtraining NLU models: overtraining your NLU model with too many sample utterances and slot values can reduce accuracy. Instead of adding exhaustive sample utterances to your interaction model, you can now run NLU Evaluations with utterances you expect users to say. If any utterance resolves to the wrong intent and/or slot, you can improve the accuracy of your skill’s NLU model by adding only those utterances as new training data (by creating new sample utterances and/or slots).

Regression tests: you can create regression tests and run them after adding new features to your skill to ensure your customer experience stays intact.

Accuracy measurements: you can measure the accuracy of your skill’s NLU model by running an NLU Evaluation with anonymized frequent live utterances surfaced in Intent History (production data), and then measure the impact on accuracy of any changes you make to your NLU model.
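To make the batch-testing idea concrete, here is a small offline harness in the same spirit. The resolve function is a hypothetical stand-in for your skill’s built NLU model (the real tool evaluates utterances server-side against your interaction model); the harness simply compares resolved intents and slots against expectations and reports the mismatches worth adding as training data.

from typing import Callable, Dict, List, Tuple

# (utterance, expected intent, expected slot values)
TestCase = Tuple[str, str, Dict[str, str]]

def evaluate(resolve: Callable[[str], Tuple[str, Dict[str, str]]],
             cases: List[TestCase]) -> List[str]:
    """Return one report line for every utterance that resolves wrongly."""
    failures = []
    for utterance, want_intent, want_slots in cases:
        got_intent, got_slots = resolve(utterance)
        if got_intent != want_intent or got_slots != want_slots:
            failures.append(f"'{utterance}': expected {want_intent} {want_slots}, "
                            f"got {got_intent} {got_slots}")
    return failures

cases = [
    ("play my workout playlist", "PlayPlaylistIntent", {"playlist": "workout"}),
    ("stop the music", "AMAZON.StopIntent", {}),
]

def dummy_resolve(utterance: str) -> Tuple[str, Dict[str, str]]:
    # Placeholder model: always falls back, so both cases fail and get reported.
    return "AMAZON.FallbackIntent", {}

for line in evaluate(dummy_resolve, cases):
    print(line)  # only these failing utterances need new training data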

Utterance Conflict Detection helps you detect utterances that are accidentally mapped to multiple intents, which reduces the accuracy of your Alexa skill’s Natural Language Understanding (NLU) model. This tool runs automatically on each model build and can be used before publishing the first version of your skill, or as you add intents and slots over time, preventing you from building models with unintended conflicts. ...
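The literal-overlap part of that check is easy to picture. The sketch below scans an interaction model for sample utterances that appear, after simple normalization, under more than one intent. The interaction model JSON shape follows the standard Alexa schema, but this is a simplification: the real detector also accounts for conflicts introduced through slot values, which plain string matching will not catch.

import json
from collections import defaultdict

model = json.loads("""{
  "interactionModel": {"languageModel": {"intents": [
    {"name": "CheckScoreIntent", "samples": ["what is the score", "score please"]},
    {"name": "GameStatusIntent", "samples": ["what is the score", "game status"]}
  ]}}
}""")

seen = defaultdict(set)  # normalized utterance -> intents that claim it
for intent in model["interactionModel"]["languageModel"]["intents"]:
    for sample in intent.get("samples", []):
        seen[" ".join(sample.lower().split())].add(intent["name"])

for utterance, intents in sorted(seen.items()):
    if len(intents) > 1:
        print(f"Conflict: '{utterance}' maps to {sorted(intents)}")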
