Further, when I see this I always want to add: explain yourself in context! It always needs to be something more than simply a result. I like the comment by Prof. Ben Shneiderman, of whom I am a fan: it's not close to consciousness; that is still far away, if ever.
AI, Explain Yourself
By Don Monroe
Communications of the ACM, November 2018, Vol. 61 No. 11, Pages 11-13
10.1145/3276742
Artificial Intelligence (AI) systems are taking over a vast array of tasks that previously depended on human expertise and judgment. Often, however, the "reasoning" behind their actions is unclear, and can produce surprising errors or reinforce biased processes. One way to address this issue is to make AI "explainable" to humans—for example, designers who can improve it or let users better know when to trust it. Although the best styles of explanation for different purposes are still being studied, they will profoundly shape how future AI is used.
Some explainable AI, or XAI, has long been familiar, as part of online recommender systems: book purchasers or movie viewers see suggestions for additional selections described as having certain similar attributes, or being chosen by similar users. The stakes are low, however, and occasional misfires are easily ignored, with or without these explanations.
Nonetheless, the choices made by these and other AI systems sometimes defy common sense, showing our faith in them is often an unjustified projection of our own thinking. "The implicit notion that AI somehow is another form of consciousness is very disturbing to me," said Ben Shneiderman, a Distinguished University Professor in the department of computer science and founding director of the Human-Computer Interaction Laboratory at the University of Maryland. ...
Friday, November 09, 2018