
Sunday, June 30, 2019

Is an Utterance Relevant to a Conversation?

Everything has a context (and relevant metadata). In any conversation we need to test for relevancy: you say something, and we are quick to say, or think, "That is irrelevant." A skill or a machine needs to do the same, if only for efficiency. This piece discusses how it's done in Amazon Alexa skill development. It starts with simple points, then gets technical.

Learning to Recognize the Irrelevant
By Young-Bum Kim

A central task of natural-language-understanding systems, like the ones that power Alexa, is domain classification, or determining the general subject of a user’s utterances. Voice services must make finer-grained determinations, too, such as the particular actions that a customer wants executed. But domain classification makes those determinations much more efficient, by narrowing the range of possible interpretations.
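As a rough illustration (my own sketch, not Amazon's pipeline), domain classification is ordinary multi-class classification over some encoding of the utterance. The domain list, encoding size, and the zeroed dummy input below are all hypothetical:

import torch
import torch.nn as nn

DOMAINS = ["Music", "Weather", "Shopping", "SmartHome"]   # hypothetical domain set

class DomainClassifier(nn.Module):
    def __init__(self, encoding_dim=256, num_domains=len(DOMAINS)):
        super().__init__()
        self.head = nn.Linear(encoding_dim, num_domains)

    def forward(self, utterance_encoding):
        # One score per domain; downstream interpretation is then
        # restricted to the top-scoring domain(s).
        return self.head(utterance_encoding)

clf = DomainClassifier()
scores = clf(torch.zeros(1, 256))                  # a dummy utterance encoding
top_domain = DOMAINS[scores.argmax(dim=-1).item()]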

Sometimes, though, an Alexa customer might say something that doesn’t fit into any domain. It may be an honest request for a service that doesn’t exist yet, or it might be a case of the customer’s thinking out loud: “Oh wait, that’s not what I wanted.”

If a natural-language-understanding (NLU) system tries to assign a domain to an out-of-domain utterance, the result is likely to be a nonsensical response. Worse, if the NLU system is tracking the conversation, so that it can use contextual information to improve performance, the interpolation of an irrelevant domain can disrupt its sequence of inferences. Getting back on track can be both time-consuming and, for the user, annoying.

One possible solution is to train a second classifier that sits on top of the domain classifier and just tries to recognize out-of-domain utterances. But this looks like an intrinsically inefficient arrangement. Data features that help a domain classifier recognize utterances that fall within a particular domain are also likely to help an out-of-domain classifier recognize utterances that fall outside it.
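Sketched concretely (hypothetical names, in PyTorch), that stacked baseline is two separately trained models, each with its own feature extractor doing largely redundant work:

import torch
import torch.nn as nn

class StackedBaseline(nn.Module):
    # Two separately trained models: each must learn its own feature
    # extractor, even though both need similar evidence about the utterance.
    def __init__(self, ood_encoder, domain_encoder, dim=256, num_domains=20):
        super().__init__()
        self.ood_encoder = ood_encoder        # trained only on in/out-of-domain labels
        self.ood_head = nn.Linear(dim, 1)
        self.domain_encoder = domain_encoder  # trained only on domain labels
        self.domain_head = nn.Linear(dim, num_domains)

    def forward(self, utterance):             # a single (unbatched) utterance
        ood_score = torch.sigmoid(self.ood_head(self.ood_encoder(utterance)))
        if ood_score.item() > 0.5:
            return None                       # reject as out-of-domain
        return self.domain_head(self.domain_encoder(utterance))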

In a paper we’re presenting at this year’s Interspeech, my colleague Joo-Kyung Kim and I describe a neural network that we trained simultaneously to recognize in-domain and out-of-domain utterances. By using a training mechanism that iteratively attempts to optimize the trade-off between those two goals, we significantly improve on the performance of a system that features a separately trained domain classifier and out-of-domain classifier.
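A hedged sketch of the joint idea (the paper's exact loss and weighting scheme differ; this is only the general shape): one shared encoder feeds both a domain head and an out-of-domain head, and a single weighted loss expresses the trade-off between the two goals.

import torch.nn as nn
import torch.nn.functional as F

class JointModel(nn.Module):
    def __init__(self, encoder, dim=256, num_domains=20):
        super().__init__()
        self.encoder = encoder                    # single shared feature extractor
        self.domain_head = nn.Linear(dim, num_domains)
        self.ood_head = nn.Linear(dim, 1)         # binary: in- vs. out-of-domain

    def forward(self, utterance):
        h = self.encoder(utterance)
        return self.domain_head(h), self.ood_head(h)

def joint_loss(domain_logits, ood_logit, domain_label, ood_label, alpha=0.5):
    # ood_label: float tensor of 0.0/1.0 with the same shape as ood_logit.
    # alpha trades domain accuracy against OOD detection; the paper adjusts
    # this balance iteratively during training rather than fixing it by
    # hand (the exact mechanism differs from this sketch).
    ce = F.cross_entropy(domain_logits, domain_label)
    bce = F.binary_cross_entropy_with_logits(ood_logit, ood_label)
    return alpha * ce + (1.0 - alpha) * bce

Because both heads read the same encoding, features learned for one task are available to the other, which is exactly the sharing the stacked baseline forgoes.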

For purposes of comparison, we set a series of performance targets for out-of-domain (OOD) classification, which both our system and the baseline system had to meet. For each OOD target, we then measured the accuracy of domain classification. On average, our system improved domain classification accuracy by about 6% for a given OOD target.

As inputs to our system, we use both word-level and character-level information. At the word level, we use a standard set of “embeddings,” which represent words as points in a 100-dimensional space, such that words with similar meanings are grouped together. We also feed the words’ constituent characters to a network that, during training, learns its own character-level embeddings, which identify substrings of characters useful for predictive purposes.
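Roughly, in PyTorch (the vocabulary sizes, example ids, and the 25-dimensional character embedding are my guesses; only the 100-dimensional word embeddings come from the article, and in practice the word table would be initialized from those pretrained vectors):

import torch
import torch.nn as nn

WORD_VOCAB, CHAR_VOCAB = 50_000, 128          # hypothetical vocabulary sizes
word_emb = nn.Embedding(WORD_VOCAB, 100)      # 100-d word vectors, as in the article
char_emb = nn.Embedding(CHAR_VOCAB, 25)       # learned during training; 25-d is a guess

# One (hypothetical) three-word utterance:
word_ids = torch.tensor([4, 17, 9])           # e.g. "play", "some", "jazz"
char_ids = torch.tensor([[15, 21, 10, 34],    # characters of each word,
                         [28, 24, 22, 14],    # padded to the same length
                         [19, 10, 35, 35]])

word_vectors = word_emb(word_ids)             # shape (3, 100)
char_vectors = char_emb(char_ids)             # shape (3, 4, 25)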

The character embeddings for each word in the input pass to a bidirectional long short-term memory (bi-LSTM) network. LSTM networks are common in natural-language processing because they factor in the order in which data are received, which is useful in analyzing both strings of characters and strings of words. Bi-LSTM models consider data sequences both forward and backward. ... "
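A character-level bi-LSTM encoder of that kind might look like the following, per word (the hidden size and the final-state pooling are my assumptions, not the paper's settings):

import torch
import torch.nn as nn

class CharBiLSTM(nn.Module):
    """Summarizes each word's character sequence into one vector."""
    def __init__(self, char_vocab=128, char_dim=25, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(char_vocab, char_dim)
        # bidirectional=True reads each character sequence both
        # left-to-right and right-to-left.
        self.lstm = nn.LSTM(char_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, char_ids):              # (num_words, max_word_len)
        x = self.emb(char_ids)                # (num_words, max_word_len, char_dim)
        _, (h_n, _) = self.lstm(x)            # h_n: (2, num_words, hidden)
        # Concatenate the final forward and backward hidden states to get
        # one character-level vector per word.
        return torch.cat([h_n[0], h_n[1]], dim=-1)   # (num_words, 2 * hidden)

encoder = CharBiLSTM()
word_char_vecs = encoder(torch.randint(0, 128, (3, 4)))   # 3 words, 4 chars -> (3, 128)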
