Good piece on the topic. Not too different from teaming with other methods, like analytics, but here there may be higher expectations and hints of expected 'autonomy'. I would add that more embedded risk analysis should be considered, mostly because expectations of such methods are sometimes overblown. Humans will necessarily always be in the loop.
Teaming Up with Artificial Intelligence By Bennie Mols in CACM
Daniel S. Weld of the University of Washington in Seattle was part of a team that analyzed 20 years of research on the interactions between people and artificial intelligences.
Creating a good artificial intelligence (AI) user experience is not easy. Everyone who uses autocorrect while writing knows that while the system usually does a pretty good job of catching and correcting errors, it sometimes makes bizarre mistakes. The same is true for the autopilot in a Tesla, but unfortunately the stakes are much higher on the road than when sitting behind a computer.
Daniel S. Weld of the University of Washington in Seattle has done a lot of research on human-AI teams. Last year, he and a group of colleagues from Microsoft proposed 18 generally applicable design guidelines for human-AI interaction, which were validated through multiple rounds of evaluation.
Bennie Mols interviewed Weld about the challenges of building a human-AI dream team:
What makes a human-AI team different from a team of a human and a digital system without AI?
First of all, AI systems are probabilistic: sometimes they get it right, but sometimes they err. Unfortunately, their mistakes are often unpredictable. In contrast, classical computer programs, like spreadsheets, work in a much more predictable way.
Second, AI can behave differently in subtly different contexts. Sometimes the change in context isn't even clear to the human. Google Search might give different auto-suggest results to different people, based on their differing previous behavior.
The third important difference is that AI systems can change over time, for example through learning.
How did your research team arrive at the guidelines for human-AI interaction?
We started by analyzing 20 years of research on human-AI interaction. We did a user evaluation with 20 AI products and 50 practitioners, and we also did expert reviews. This led to 18 guidelines divided over four phases of the human-AI interaction process: the initial phase, before the interaction has started; the phase during interaction; the phase after the interaction, in case the AI system made a mistake; and finally, over time. During the last phase, the system might get updates, while humans might evolve their interaction with the system.
Tuesday, June 02, 2020