
Monday, September 30, 2019

Robotic Reliability vs Reasoning with Transparency

We came to similar conclusions: observed reliability typically matters much more than people think, and it is best to solve that problem before aiming at advanced reasoning in a robot or robotic process. The study below further explores the value of transparency in an embedded reasoning process, especially in human-robot teams. Both are essential in a future of such cooperation.

When It Comes to Robots, Reliability May Matter More Than Reasoning
U.S. Army Research Laboratory    September 25, 2019

A study by the U.S. Army Research Laboratory (ARL) and the University of Central Florida found that human confidence in robots decreases after a robot makes a mistake, even when the robot is transparent about its reasoning process. The researchers explored human-agent teaming to determine how the transparency of agents, such as robots, unmanned vehicles, or software agents, affects human trust, task performance, workload, and agent perception. Subjects who observed a robot making a mistake downgraded its reliability, even when it made no subsequent mistakes. Boosting agent transparency improved participants' trust in the robot, but only when the robot was collecting or filtering data. ARL's Julia Wright said, "Understanding how the robot's behavior influences their human teammates is crucial to the development of effective human-robot teams, as well as the design of interfaces and communication methods between team members."
