Good piece from BAIR, the Berkeley Artificial Intelligence Research lab. Note the use of games to establish, demonstrate, and experiment with collaborative interactions. YES, there must be an appropriate level of contextual understanding to make collaboration work. And such collaboration is the best form of assistance. Descriptions and videos of game play are at the link below:
Collaborating with Humans Requires Understanding Them
By Rohin Shah and Micah Carroll, Berkeley
AI agents have learned to play Dota, StarCraft, and Go, by training to beat an automated system that increases in difficulty as the agent gains skill at the game: in vanilla self-play, the AI agent plays games against itself, while in population-based training, each agent must play against a population of other agents, and the entire population learns to play the game.
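To make the self-play idea concrete, here is a minimal sketch in Python. It uses a toy game (Nim) and a simple tabular Monte Carlo update rather than the deep reinforcement learning actually used for Dota, StarCraft, or Go; the game choice, function names, and parameters are illustrative assumptions, not anything from the original post.

import random
from collections import defaultdict

PILE, ACTIONS = 10, (1, 2, 3)            # toy Nim: take 1-3 stones; taking the last stone wins
Q = defaultdict(float)                   # one shared value table: the same agent plays both sides
ALPHA, EPS, EPISODES = 0.5, 0.1, 20000

def choose(stones):
    # Epsilon-greedy move for whichever side is to act.
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < EPS:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

for _ in range(EPISODES):
    stones, history = PILE, []           # alternating (state, action) pairs, one per ply
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    reward = 1.0                         # the player who took the last stone wins
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward                 # perspective flips between the two players each ply

# Greedy policy after training; it tends toward the known optimum of taking (stones % 4) stones.
print({s: max((a for a in ACTIONS if a <= s), key=lambda a: Q[(s, a)]) for s in range(1, PILE + 1)})

Because both sides of every game are played by the same improving policy, the opponent gets harder exactly as the agent gets better. Population-based training generalizes this loop by maintaining many such agents and sampling opponents from the whole population.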
This technique has a lot going for it. There is a natural curriculum in difficulty: as the agent improves, the task it faces gets harder, which leads to efficient learning. It doesn’t require any manual design of opponents, or handcrafted features of the environment. And most notably, in all of the games above, the resulting agents have beaten human champions.
The technique has also been used in collaborative settings: OpenAI had one public match where each team was composed of three OpenAI Five agents alongside two human experts, and the For The Win (FTW) agents trained to play Quake were paired with both humans and other agents during evaluation. In the Quake case, humans rated the FTW agents as more collaborative than fellow humans in a participant survey.
However, when we dig into the weeds, we can see that this is not a panacea. In the 2.5-minute discussion after the OpenAI Five cooperative game (see 4:33:05 onwards in the video), we can see that some issues did arise: ...
Tuesday, October 22, 2019
Machines Collaborating with Humans
Labels: AI, Assistance, Berkeley, Collaboration, Deep Understanding, games, learning, OpenAI