
Friday, September 04, 2020

ML Models for Everyday Actions

Of course, this is what we would like to get to in creating general AI: solving problems and delivering real decisions. But we are not there yet. I like the challenge outlined here.

Toward an ML Model That Can Reason About Everyday Actions
MIT News
Kim Martineau
August 31, 2020

Researchers from the Massachusetts Institute of Technology (MIT), Columbia University, and IBM have trained a hybrid language-vision machine learning model to recognize abstract concepts in video. The researchers used the WordNet word-meaning database to map how each action-class label in MIT's Multi-Moments in Time and DeepMind's Kinetics datasets relates to the other labels in both datasets. The model was trained on this graph of abstract classes to generate a numerical representation for each video that aligns with word representations of the depicted actions, then to combine those into a new set of representations that identify abstractions common to all the videos. When compared with humans performing the same visual reasoning tasks online, the model performed as well as the humans in many situations. MIT's Aude Oliva said, "A model that can recognize abstract events will give more accurate, logical predictions and be more useful for decision-making."
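To make the idea of relating action labels through shared abstractions concrete, here is a minimal sketch of finding the most specific abstraction two action labels have in common. The actual work uses the full WordNet hierarchy; the tiny hand-built `PARENT` graph and the label names below are illustrative stand-ins, not the datasets' real classes.

```python
# Toy abstraction hierarchy: each action label points to a more
# abstract parent class. WordNet plays this role in the actual
# research; this small graph is a hypothetical stand-in.
PARENT = {
    "chopping": "cutting",
    "slicing": "cutting",
    "cutting": "physical activity",
    "jogging": "running",
    "sprinting": "running",
    "running": "physical activity",
    "physical activity": None,  # root of the toy hierarchy
}

def ancestors(label):
    """Return the chain from a label up to the most abstract class."""
    chain = []
    while label is not None:
        chain.append(label)
        label = PARENT.get(label)
    return chain

def common_abstraction(a, b):
    """Most specific class shared by both labels (lowest common ancestor)."""
    seen = set(ancestors(a))
    for cls in ancestors(b):
        if cls in seen:
            return cls
    return None

print(common_abstraction("chopping", "slicing"))   # -> cutting
print(common_abstraction("chopping", "jogging"))   # -> physical activity
```

A video model trained against such a graph can then be rewarded for producing representations that cluster at the right level of abstraction, not just for predicting the leaf-level action label.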
