Tuesday, April 05, 2022

Explainable AI Risks

Excerpt from the Opinion section of Communications of the ACM. Fascinating piece here.

Explainable AI

By Veda C. Storey, Roman Lukyanenko, Wolfgang Maass, Jeffrey Parsons

Communications of the ACM, April 2022, Vol. 65 No. 4, Pages 27-29, DOI: 10.1145/3490699

Advances in AI, especially based on machine learning, have provided a powerful way to extract useful patterns from large, heterogeneous data sources. The rise in massive amounts of data, coupled with powerful computing capabilities, makes it possible to tackle previously intractable real-world problems. Medicine, business, government, and science are rapidly automating decisions and processes using machine learning. Unlike traditional AI approaches based on explicit rules expressing domain knowledge, machine learning often lacks explicit human-understandable specification of the rules producing model outputs. With growing reliance on automated decisions, an overriding concern is understanding the process by which "black box" AI techniques make decisions. This is known as the problem of explainable AI.2 However, opening the black box may lead to unexpected consequences, as when opening Pandora's Box.

Black Box of Machine Learning

Advanced machine learning algorithms, such as deep learning neural networks or support vector machines, are not easily understood by humans. Their power and success stem from the ability to generate highly complex decision models built upon hundreds of iterations over training data.5 The performance of these models depends on many factors, including the availability and quality of training data and the skills and domain expertise of data scientists. The complexity of machine learning models may be so great that even data scientists struggle to understand the underlying algorithms. For example, deep learning was used in the program that famously beat the reigning Go world champion,6 yet the data scientists responsible could not always understand how or why the algorithms performed as they did.

Opening the black box involves providing human-understandable explanations for why a model reaches a decision and how it works. The motivation is to ensure decision making is justified, fair, and ethical, and to treat the "right to explanation" as a basic human right.7 Notably, the European Union's General Data Protection Regulation requires companies to provide "meaningful information" about the logic in their programs (Article 13.2(f)). The goal is to ensure the rules, data, assumptions, and development processes underlying a machine learning model are understandable, transparent, and accessible to as many people as possible or necessary, including managers, users, customers, auditors, and citizens.

The explainable AI challenge usually focuses on how to open the black box of AI; for example, by considering how various features contribute to the output of a model or by using counterfactual explanations that measure the extent to which a model output would change if a feature were missing.7 We pose a seldom-asked, but vital, question: Once a mechanism is in place to open the black box, how do we, as a society, prepare to deal with the consequences of exposing the reasoning that generates the output from AI models?
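The two explanation styles mentioned above can be made concrete with a small sketch. The snippet below is illustrative only and not from the article: the logistic-regression "black box," the synthetic data, and the ablation_importance and counterfactual_shift helpers are all assumptions chosen for brevity. It scores each feature by how much predictive accuracy drops when that feature is scrambled, and probes a counterfactual by asking how far the predicted probability moves when one feature is changed.

```python
# Illustrative sketch (not from the article): feature-contribution scores via
# permutation ablation, plus a simple counterfactual probe. The model, data,
# and helper names are assumptions made for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # three synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # feature 2 carries no signal

model = LogisticRegression().fit(X, y)         # stand-in for a "black box"

def ablation_importance(model, X, y):
    """Score each feature by the accuracy lost when it is shuffled."""
    base = model.score(X, y)
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # sever the feature-label link
        drops.append(base - model.score(Xp, y))
    return drops

def counterfactual_shift(model, x, j, new_value=0.0):
    """How far does the predicted probability move if feature j changes?"""
    x_cf = x.copy()
    x_cf[j] = new_value
    prob = lambda v: model.predict_proba(v.reshape(1, -1))[0, 1]
    return prob(x_cf) - prob(x)

print("accuracy drop per feature:", ablation_importance(model, X, y))
print("shift from zeroing feature 0 of row 0:",
      counterfactual_shift(model, X[0], 0))
```

In this toy setup, feature 2 should receive a near-zero importance score, while zeroing feature 0 typically moves the prediction substantially; exposing exactly this kind of reasoning is what the authors suggest society must be prepared to confront.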

Pandora's Box of Explainable AI

In Greek mythology, Pandora's Box refers to a container of evils that are unleashed and cannot be contained once the box is opened. We employ this analogy because, although opening the black box of AI may bring transparency to the machine learning model, it does not mean the processes underlying the model are free of problems. As with Pandora's Box, these problems are revealed once we move from a black box to a white box of AI. Machine learning explainability is a worthy goal; however, we must prepare for the outcome. Opening the black box can metaphorically open a Pandora's Box, as shown in the accompanying figure. ....
