Better Conversation in Context
Model Moves Computers Closer to Understanding Human Conversation
Piotr Zelasko at the Johns Hopkins Center for Language and Speech Processing has developed a machine learning model that can differentiate the functions of speech in conversation transcripts produced by language understanding (LU) systems. The model performs dialogue act recognition, identifying the underlying intent of each utterance and assigning it to a category such as "Statement," "Question," or "Interruption" in the final transcript. Zelasko sought to ensure his system could understand ordinary conversation, which may help with tasks such as summarization, intent recognition, and detection of key phrases. Zelasko said LU systems no longer need to contend with "huge, unstructured chunks of text, which they struggle with when trying to classify things such as the topic, sentiment, or intent of the text. Instead, they can work with a series of expressions, which are saying very specific things."
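To make the idea of dialogue act recognition concrete, the toy sketch below tags each utterance in a transcript with an act label. It is a hypothetical illustration only, assuming scikit-learn and a tiny hand-labeled dataset; it is not the neural model described in the article, and the utterances, labels, and classifier choice are all stand-ins.

# Minimal, hypothetical sketch of dialogue act classification.
# Assumes scikit-learn and a toy labeled dataset; illustrates the
# general task, not Zelasko's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training utterances paired with dialogue act labels.
utterances = [
    "I think we should ship the release on Friday.",
    "The numbers look good this quarter.",
    "Did you get a chance to review the draft?",
    "When does the meeting start?",
    "Sorry to jump in, but we are running out of time.",
    "Hold on, let me stop you there.",
]
labels = [
    "Statement", "Statement",
    "Question", "Question",
    "Interruption", "Interruption",
]

# Bag-of-words features plus a linear classifier: a deliberately simple
# stand-in for a model that maps utterances to dialogue acts.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(utterances, labels)

# Tag a new transcript one utterance at a time, so downstream components
# receive labeled expressions rather than one unstructured chunk of text.
transcript = [
    "The demo went better than expected.",
    "Can we push the deadline?",
    "Wait, before you answer that...",
]
for utterance, act in zip(transcript, model.predict(transcript)):
    print(f"[{act}] {utterance}")

The point of the sketch is the output shape: once every utterance carries an act label, downstream steps like summarization or key-phrase detection can operate on a sequence of labeled expressions instead of raw text.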