
Tuesday, May 25, 2021

What should a Robot do when it Cannot Trust the Model it was Trained on?

Knowing its limitations is also something we expect of useful 'intelligence'.  Or do we?  How is this different?


Helping Robots Learn What They Can and Can't Do in New Situations

The Michigan Engineer News Center, Dan Newman, May 19, 2021

University of Michigan researchers have developed a method that helps robots predict when the model they were trained on is unreliable, and lets them learn from interacting with the environment. The researchers first built a simple model of a rope's dynamics as a robot moved it around an open space. They then added obstacles and trained a classifier to recognize when that simple model was reliable, without the classifier ever learning how the rope interacted with the objects, and added recovery steps for when the classifier judged the model unreliable. The approach succeeded 84% of the time, versus 18% for a full dynamics model that attempts to capture every possible scenario. It also succeeded in two real-world settings: grabbing a phone charging cable, and manipulating hoses and straps under a car hood. Michigan's Dmitry Berenson said, "This method can allow robots to generalize their knowledge to new situations that they have never encountered before."
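The pipeline the summary describes — a coarse dynamics model, a classifier that learns where that model can be trusted, and a recovery behavior for when it cannot — can be sketched in a few lines. This is a toy 1-D illustration of the idea, not the Michigan team's code; every function name, threshold, and the "obstacle" world are invented for the example.

```python
def simple_model(state, action):
    # Coarse free-space dynamics: assume the object just follows the action.
    return state + action

def train_reliability_classifier(transitions, tol=0.1):
    # From observed (state, action, next_state) triples, learn where the
    # simple model's prediction error stays small. Here the "classifier"
    # is just a 1-D position threshold fit from the labeled errors;
    # a real system would train a learned classifier on rich features.
    reliable = [s for (s, a, s_next) in transitions
                if abs(simple_model(s, a) - s_next) < tol]
    boundary = max(reliable) if reliable else 0.0
    return lambda state: state <= boundary

def recovery_step(state):
    # When the model is untrusted, back away toward known-reliable territory.
    return state - 0.5

def act(state, goal, is_reliable, steps=20):
    # Control loop: plan with the simple model only where the classifier
    # trusts it; otherwise take a recovery step instead.
    for _ in range(steps):
        if not is_reliable(state):
            state = recovery_step(state)
            continue
        action = 0.2 if goal > state else -0.2
        state = simple_model(state, action)
        if abs(state - goal) < 0.2:
            break
    return state
```

For instance, if an (imagined) obstacle at position 3 stops all motion, transitions collected near it will show large model error, the fitted boundary will sit below 3, and `act` will only plan with the simple model on the trusted side of that boundary.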
