
Thursday, July 12, 2018

Interpretability Testing Examined

Testing machine learning interpretability techniques, in O'Reilly

By Patrick Hall, Navdeep Gill, and Lingyao Meng

The importance of testing your tools, using multiple tools, and seeking consistency across various interpretability techniques.

This post contains excerpts from the report “An Introduction to Machine Learning Interpretability.” ... Read the full report on O'Reilly's learning platform.

Interpreting machine learning models is a pretty hot topic in data science circles right now. Machine learning models need to be interpretable to enable wider adoption of advanced predictive modeling techniques, to prevent socially discriminatory predictions, to protect against malicious hacking of decisioning systems, and simply because machine learning models affect our work and our lives. Like others in the applied machine learning field, my colleagues and I at H2O.ai have been developing machine learning interpretability software for the past 18 months or so.

We summarized applied concerns in the interpretability field in an O'Reilly report earlier this year. What follows here are excerpts of that report, plus some new, bonus material. This post will focus on a few important, but seemingly less often discussed, interpretability issues: the approximate nature of machine learning interpretability techniques, and how to test model explanations. ...
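As a concrete illustration of the "use multiple tools and seek consistency" advice, here is a minimal sketch of one such test: compute global feature importance with two unrelated techniques and check that the resulting rankings roughly agree. The dataset, model, and choice of techniques (scikit-learn's impurity-based importances versus its model-agnostic permutation importance) are illustrative assumptions on my part, not methods taken from the report.

from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset and model; swap in your own.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Technique 1: impurity-based importance, a property of the fitted trees.
impurity_imp = model.feature_importances_

# Technique 2: permutation importance, a model-agnostic post hoc measure
# computed on held-out data.
perm = permutation_importance(model, X_test, y_test, n_repeats=10,
                              random_state=0)
perm_imp = perm.importances_mean

# Both measures are approximations. If they rank features very differently,
# treat the explanations with suspicion rather than trusting either one.
rho, _ = spearmanr(impurity_imp, perm_imp)
print(f"Spearman rank correlation between importance measures: {rho:.2f}")

A high rank correlation doesn't prove the explanations are right, but a low one is a cheap, useful warning that at least one of the approximate techniques is misleading you, which is exactly the kind of sanity check the authors argue for.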
