Monday, May 30, 2022

Quest for Explainable AI

Yet another explanation, reasonably good intro.

The Quest for Explainable AI, by Arthur Cole (@acole602) in VentureBeat

Artificial intelligence (AI) is highly effective at parsing extreme volumes of data and making decisions based on information that is beyond the limits of human comprehension. But it suffers from one serious flaw: it cannot explain how it arrives at the conclusions it presents, at least not in a way that most people can understand.

This “black box” characteristic is starting to throw some serious kinks in the applications that AI is empowering, particularly in medical, financial and other critical fields, where the “why” of any particular action is often more important than the “what.”

A peek under the hood

This is leading to a new field of study called explainable AI (XAI), which seeks to infuse AI algorithms with enough transparency so users outside the realm of data scientists and programmers can double-check their AI’s logic to make sure it is operating within the bounds of acceptable reasoning, bias and other factors. 

As tech writer Scott Clark noted on CMSWire recently, explainable AI provides the insight into the decision-making process that users need to understand why an AI system behaves the way it does. In this way, organizations can identify flaws in their data models, which ultimately leads to enhanced predictive capabilities and deeper insight into what works and what doesn’t in AI-powered applications.

The key element in XAI is trust. Without it, doubt will shadow every action or decision an AI model generates, making it risky to deploy the model into the production environments where AI is supposed to bring true value to the enterprise.

According to the National Institute of Standards and Technology, explainable AI should be built around four principles:

Explanation – the ability to provide evidence, support or reasoning for each output;

Meaningfulness – the ability to convey explanations in ways that users can understand;

Accuracy – the ability to explain not just why a decision was made, but how it was made; and

Knowledge Limits – the ability to determine when its conclusions are not reliable because they fall beyond the limits of its design.

While these principles can be used to guide the development and training of intelligent algorithms, they are also intended to guide human understanding of what explainable means when applied to what is essentially a mathematical construct.
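The NIST principles can be made concrete with a toy model. The sketch below, a hand-rolled linear scorer, illustrates two of them: "Explanation" (each feature's additive contribution to the output is reported as evidence) and "Knowledge Limits" (the model declines to answer when an input falls outside the ranges it was designed for). The feature names, weights, and ranges are entirely hypothetical and exist only for illustration, not taken from any real system.

```python
# Hypothetical training-time metadata: valid input ranges and learned weights.
TRAINING_RANGES = {"income": (20_000, 200_000), "debt_ratio": (0.0, 1.0)}
WEIGHTS = {"income": 0.00001, "debt_ratio": -2.0}
BIAS = 0.5

def predict_with_explanation(features: dict) -> dict:
    # Knowledge Limits: refuse to score inputs beyond the model's design.
    for name, value in features.items():
        lo, hi = TRAINING_RANGES[name]
        if not lo <= value <= hi:
            return {
                "score": None,
                "explanation": f"{name}={value} falls outside the model's "
                               f"design limits [{lo}, {hi}]",
            }
    # Explanation: report each feature's additive contribution to the score,
    # which is trivially exact for a linear model.
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    return {"score": score, "explanation": contributions}

print(predict_with_explanation({"income": 80_000, "debt_ratio": 0.3}))
print(predict_with_explanation({"income": 80_000, "debt_ratio": 1.5}))
```

For real, nonlinear models the "explanation" step is far harder; per-feature attributions of this additive kind are what techniques such as SHAP approximate.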

Buyer beware of explainable AI

The key problem with XAI currently, according to Fortune’s Jeremy Kahn, is that it has already become a marketing buzzword to push platforms out the door rather than a true product designation developed under any reasonable set of standards. ...