U.S. Army Scientists Improve Human-Agent Teaming by Making AI Agents More Transparent
ARL News
Researchers at the U.S. Army Research Laboratory (ARL) have developed ways to improve collaboration between humans and artificially intelligent agents. The researchers say they have enhanced agent transparency: the ability of a robot, unmanned vehicle, or software agent to communicate its intent, performance, future plans, and reasoning process to humans.

In 2016, the U.S. Defense Science Board identified six barriers to human trust in autonomous systems, including low observability, predictability, directability, and low mutual understanding of common goals. The ARL researchers addressed these issues by developing the Situation awareness-based Agent Transparency (SAT) model and measuring its efficacy on human-agent team performance in human factors studies.

One project, IMPACT, examined the effects of different levels of agent transparency, based on the SAT model, on human operators' decision-making. Meanwhile, the Autonomous Squad Member project involved a small ground robot interacting and communicating with an infantry squad ....
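To make the idea of "levels of agent transparency" concrete: the SAT model organizes what an agent communicates into tiers, roughly covering its current goals and actions, the reasoning behind them, and its projected outcomes with uncertainty. The sketch below shows how such tiered status reports might be structured in software. It is purely illustrative; the SAT model is a conceptual framework, not an API, and all class and field names here are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of a SAT-style tiered transparency report.
# Level names follow the general SAT idea: plans, reasoning, projection.
@dataclass
class SATReport:
    goals_and_actions: str   # Level 1: current status, goals, actions
    reasoning: str           # Level 2: reasoning behind those actions
    projection: str          # Level 3: projected outcomes
    uncertainty: float       # Level 3: agent's uncertainty, 0.0-1.0

    def render(self, level: int) -> str:
        """Return report text up to the requested transparency level."""
        parts = [f"L1 plan: {self.goals_and_actions}"]
        if level >= 2:
            parts.append(f"L2 reasoning: {self.reasoning}")
        if level >= 3:
            parts.append(f"L3 projection: {self.projection} "
                         f"(uncertainty {self.uncertainty:.0%})")
        return "\n".join(parts)

report = SATReport(
    goals_and_actions="Reroute around obstacle toward rally point",
    reasoning="Primary route blocked; detour minimizes exposure",
    projection="Arrive at rally point in 6 minutes",
    uncertainty=0.2,
)
print(report.render(level=3))
```

Studies like the ones described above vary the level shown to the operator (here, the `level` argument) and measure the effect on decision-making and trust.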
Saturday, February 03, 2018