One of the most important issues in data science is interpreting models, especially for particular contextual uses. And part of the context is always metadata driven. ...
In O'Reilly:
Interpreting predictive models with Skater: Unboxing model opacity
A deep dive into model interpretation as a theoretical concept and a high-level overview of Skater.
By Pramit Choudhary ....
I particularly like these general overview statements on interpretability:
" ... Ideally, you should be able to query the model to understand the what, why, and how of its algorithmic decisions:
What information can the model provide to avoid prediction errors? You should be able to query and understand latent variable interactions in order to evaluate and understand, in a timely manner, what features are driving predictions. This will ensure the fairness of the model.
Why did the model behave in a certain way? You should be able to identify and validate the relevant variables driving the model’s outputs. Doing so will allow you to trust in the reliability of the predictive system, even in unforeseen circumstances. This diagnosis will ensure accountability and safety of the model.
How can we trust the predictions made by the model? You should be able to validate any given data point to demonstrate to business stakeholders and peers that the model works as expected. This will ensure transparency of the model. ... "
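The first question, what features are driving predictions, is the easiest to make concrete. A minimal sketch below uses scikit-learn's permutation importance rather than Skater itself (the dataset and model are illustrative choices, not anything from the article): shuffle one feature at a time and measure how much held-out accuracy drops.

```python
# Illustrative sketch: querying a model for the features driving its
# predictions via permutation importance (model-agnostic, like Skater's
# global interpretation tools). Dataset and model choice are assumptions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature on the held-out set and record the mean drop in
# accuracy -- larger drops mean the model leans harder on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda t: t[1], reverse=True)
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

Because it only needs predictions, not model internals, the same query works on an opaque black-box model, which is exactly the setting the article is concerned with.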
Wednesday, March 28, 2018