
Tuesday, April 06, 2021

Explanations and Contexts

An example of the need for explainability. As in a conversation with a human, we may want the option of getting an explanation of a solution. But the nature of an explanation often depends on context. Is it for management or for an engineer? Is it for a current set of data or for a generalization? Does it depend on some regulation or special constraints? Context is often key; this came up often in our own work.

Researchers Develop 'Explainable' Algorithm

University of Toronto (Canada), Matthew Tierney, March 31, 2021

An "explainable" artificial intelligence (XAI) algorithm developed by researchers at Canada's University of Toronto (U of T) and LG AI Research was designed to find and fix defects in display screens. XAI addresses issues with the "black box" approach of machine learning strategies, in which the artificial intelligence makes decisions entirely on its own. With XAI's "glass box" approach, XAI algorithms are run simultaneously with traditional algorithms to audit the validity and level of their learning performance, perform debugging, and identify training efficiencies. U of T's Mahesh Sudhakar said LG "had an existing [machine learning] model that identified defective parts in LG products with displays, and our task was to improve the accuracy of the high-resolution heat maps of possible defects while maintaining an acceptable run time." The new XAI algorithm, Semantic Input Sampling for Explanation (SISE), outperformed comparable approaches on industry benchmarks.... ' 
