Well covered in ACM pieces of late; useful takes.
Interpretable Machine Learning: Moving from Mythos to Diagnostics
By Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar
Communications of the ACM, August 2022, Vol. 65, No. 8, Pages 43-50. DOI: 10.1145/3546036
The emergence of machine learning as a society-changing technology in the past decade has triggered concerns about people's inability to understand the reasoning of increasingly complex models. The field of interpretable machine learning (IML) grew out of these concerns, with the goal of empowering various stakeholders to tackle use cases such as building trust in models, performing model debugging, and generally informing real human decision-making [7, 10, 17].
Yet despite the flurry of IML methodological development over the past several years, a stark disconnect characterizes the current overall approach. As shown in Figure 1, IML researchers develop methods that typically optimize for diverse but narrow technical objectives, yet their claimed use cases for consumers remain broad and often underspecified. Echoing similar critiques about the field [17], it has thus remained difficult to evaluate these claims sufficiently and to translate methodological advances into widespread practical impact. ...