A technical primer for explainability methods
Explainable AI (XAI) Methods Part 1 — Partial Dependence Plot (PDP)
A primer on the Partial Dependence Plot: its advantages and disadvantages, and how to use and interpret it
By Seungjun (Josh) Kim in TowardsDataScience
Explainable Machine Learning (XAI)
Explainable Machine Learning (XAI) refers to efforts to make sure that artificial intelligence programs are transparent in their purposes and in how they work. [1] It has been one of the hottest keywords in the Data Science and Artificial Intelligence community over the past few years. This is understandable, because many state-of-the-art (SOTA) models are black boxes that are difficult to interpret or explain despite their top-notch predictive performance. For many organizations and corporations, a few percentage points of additional classification accuracy may not be as important as answers to questions like "how does feature A affect the outcome?" This is why XAI has been receiving more of the spotlight: it greatly aids decision making and causal inference.
In this series of posts, I will cover various XAI methods that are in wide use in the Data Science community today. The first method I will cover is the Partial Dependence Plot, or PDP for short.
Partial Dependence Plot (PDP)
Partial Dependence (PD) is a global, model-agnostic XAI method. Global methods give a comprehensive explanation of the entire data set, describing the impact of feature(s) on the target variable in the context of the overall data. Local methods, on the other hand, describe the impact of feature(s) at the level of individual observations. Model-agnostic means that the method can be applied to any algorithm or model.
Simply put, a PDP shows the marginal effect or contribution of one or more features to the predicted value of your black box model [2]. More formally, for a set of features of interest $x_S$, the remaining features $X_C$, and a fitted model $\hat{f}$, the partial dependence function for regression can be defined as:

$$\hat{f}_S(x_S) = \mathbb{E}_{X_C}\left[\hat{f}(x_S, X_C)\right] = \int \hat{f}(x_S, x_C)\, d\mathbb{P}(x_C)$$

In practice, it is estimated by averaging the model's predictions over the observed values of the other features in the training data:

$$\hat{f}_S(x_S) = \frac{1}{n}\sum_{i=1}^{n} \hat{f}\left(x_S, x_C^{(i)}\right)$$
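This averaging estimate is straightforward to compute by hand or with scikit-learn's built-in utilities. Below is a minimal sketch in Python; the data set (California housing), the gradient boosting model, and the feature names (`MedInc`, `AveOccup`) are illustrative assumptions rather than choices made in this article.

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay, partial_dependence

# Fit a "black box" regression model on an example data set
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Manual PD estimate for one feature at a single grid value:
# fix the feature at that value for every row, then average the predictions,
# i.e. (1/n) * sum_i f_hat(x_S, x_C^(i))
def pd_at(model, X, feature, value):
    X_mod = X.copy()
    X_mod[feature] = value
    return model.predict(X_mod).mean()

grid = np.linspace(X["MedInc"].min(), X["MedInc"].max(), 20)
pd_curve = [pd_at(model, X, "MedInc", v) for v in grid]

# The same quantity via scikit-learn, plus a ready-made plot
pd_result = partial_dependence(model, X, features=["MedInc"], kind="average")
PartialDependenceDisplay.from_estimator(model, X, features=["MedInc", "AveOccup"])
plt.show()
```

The manual loop mirrors the averaging formula above, while `PartialDependenceDisplay.from_estimator` produces the familiar one-feature PD plots directly.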