Tuesday, December 11, 2018

Finishing Your Sentences with Common Sense

Yes, it's one way to produce common-sense-driven sentence completions. Here are some snippets of work underway in the space, starting with a NYT article, then links to technical details. The overall challenge for common sense natural language understanding research at this level is well described, but the solutions are technical:

Finally, a Machine That Can Finish Your Sentence

Completing someone else’s thought is not an easy trick for A.I. But new systems are starting to crack the code of natural language. By Cade Metz in the NYT.  ... "

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova

We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications.  ..."
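To make the fine-tuning idea concrete, here is a minimal sketch using the Hugging Face transformers library (my choice for illustration; it is not the authors' code, which predates that library). A single classification head, the "one additional output layer" the abstract mentions, sits on top of the pre-trained bidirectional encoder, and the whole stack is trained from the task loss:

# Minimal sketch: fine-tuning a pre-trained BERT encoder for a
# two-way sentence-pair classification task (e.g. inference-style
# plausibility). Illustrative only; labels and sentences are toy data.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# num_labels=2 adds the single extra output layer on top of BERT.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
model.train()

# Encode a sentence pair; BERT conditions on both jointly.
inputs = tokenizer("She opened the hood of the car.",
                   "Then, she examined the engine.",
                   return_tensors="pt")
labels = torch.tensor([1])  # toy label: 1 = plausible continuation

outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # gradients flow through the whole encoder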

And of course, a data challenge for this problem, with early success results:

SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference

Rowan Zellers, Yonatan Bisk, Roy Schwartz, Yejin Choi
Paul G. Allen School of Computer Science & Engineering, University of Washington
Allen Institute for Artificial Intelligence

Further description of the data challenge from the Swag paper:

Given a partial description like “she opened the hood of the car,” humans can reason about the situation and anticipate what might come next (“then, she examined the engine”). In this paper, we introduce the task of grounded commonsense inference, unifying natural language inference and commonsense reasoning.

We present Swag, a new dataset with 113k multiple-choice questions about a rich spectrum of grounded situations. To address the recurring challenges of annotation artifacts and human biases found in many existing datasets, we propose Adversarial Filtering (AF), a novel procedure that constructs a de-biased dataset by iteratively training an ensemble of stylistic classifiers and using them to filter the data. To account for the aggressive adversarial filtering, we use state-of-the-art language models to massively oversample a diverse set of potential counterfactuals.

Empirical results demonstrate that while humans can solve the resulting inference problems with high accuracy (88%), various competitive models struggle on our task. We provide comprehensive analysis that indicates significant opportunities for future research.  ... "
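To illustrate the multiple-choice setup the abstract describes, here is a toy sketch. The context and correct ending come from the paper's own example; the three distractor endings and the scoring function are placeholders of my own, not items from the dataset:

# Toy illustration of the Swag task format: a grounded context, four
# candidate endings, and a model that picks the most plausible one.
from dataclasses import dataclass
from typing import List

@dataclass
class SwagExample:
    context: str        # partial description of a grounded situation
    endings: List[str]  # four candidate continuations
    label: int          # index of the correct ending

example = SwagExample(
    context="She opened the hood of the car.",
    endings=[
        "Then, she examined the engine.",           # correct, from the paper
        "Then, she flew the car over the bridge.",  # distractor (invented here)
        "Then, she closed her textbook.",           # distractor (invented here)
        "Then, the engine examined her.",           # distractor (invented here)
    ],
    label=0,
)

def plausibility(context: str, ending: str) -> float:
    # Stand-in scorer; a real system would use a trained model's score,
    # e.g. a language model's log-probability of the ending given context.
    return -abs(len(ending) - len(context))

prediction = max(range(len(example.endings)),
                 key=lambda i: plausibility(example.context, example.endings[i]))
print("model picks ending", prediction, "| correct:", prediction == example.label)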
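And a rough sketch of the Adversarial Filtering loop itself, as the abstract describes it: on each round, train a stylistic classifier on one half of the data, then replace any negative ending it finds easy on the other half with a fresh candidate from the oversampled language-model pool. All names here (train_classifier, is_easy, pool.draw) are hypothetical stand-ins, not the authors' implementation:

# Rough sketch of Adversarial Filtering (AF): iteratively de-bias a
# dataset by swapping out negatives that a stylistic classifier finds easy.
import random

def adversarial_filter(examples, pool, train_classifier, n_rounds=10):
    for _ in range(n_rounds):
        random.shuffle(examples)
        half = len(examples) // 2
        train, held_out = examples[:half], examples[half:]

        # Train a stylistic classifier to separate real from generated endings.
        clf = train_classifier(train)

        # On held-out data, replace easy negatives with harder candidates
        # drawn from the oversampled pool of potential counterfactuals.
        for ex in held_out:
            for i, ending in enumerate(ex.endings):
                if i != ex.label and clf.is_easy(ex.context, ending):
                    ex.endings[i] = pool.draw(ex.context)
    return examples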
