
Thursday, January 10, 2019

Interpreting and Securely Using Machine Learning

Good piece here, which discusses the nature of Trust, Causality, Transferability, and Informativeness.

... Yes to that, but ultimately it comes down to how you can link and use the results as part of a current or proposed business process.   Try that first ...

Interpreting Machine Learning Models: A Myth or Reality?

 Despite the predictive capabilities of supervised machine learning, can we trust the machines? As much as we want the models to be good, we also want them to be interpretable. Yet, the task of interpretation often remains vague.

Despite the proliferation of machine learning into our daily lives, ranging from finance to justice, a majority of users find the models difficult to understand. The lack of a commonly agreed-upon definition of interpretability means that, rather than being a monolithic concept, interpretability embeds various related concepts.

Interpretability is mostly discussed in the context of supervised learning, in comparison with other fields of machine learning such as reinforcement or interactive learning. Existing research approaches interpretability as a means to establish trust. Yet it needs to be clarified whether trust refers to the robustness of a model's performance or to some other property.

Viewing interpretability simply as a low-level mechanistic understanding of models might be problematic. Despite machines' capability of discovering causal structure in data, they are still far from perfect at offering relevant answers to the tasks they are supposed to solve in real life. One reason for this failure might be the oversimplification of optimization goals, so that they fail to fulfill more complicated real-life goals. Another reason might be training data that is unrepresentative of the deployment environment. Besides, given a model's complexity, its parameters, algorithms, and the factors of human agency involved all need to be taken into account.
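
A quick illustrative sketch of that second failure mode (mine, not the article's), using synthetic data and scikit-learn: the validation score looks healthy, while accuracy on a shifted deployment distribution does not.

# Illustrative sketch (synthetic data, scikit-learn assumed installed): a model
# fitted to an unrepresentative historical sample can look fine in validation
# yet degrade once the deployment distribution shifts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # The true concept is nonlinear; a linear model only fits it locally.
    X = rng.normal(loc=[shift, 0.0], scale=1.0, size=(n, 2))
    y = (X[:, 0] ** 2 + X[:, 1] > 1.0).astype(int)
    return X, y

X_train, y_train = make_data(5000, shift=0.0)    # historical training sample
X_deploy, y_deploy = make_data(5000, shift=2.0)  # shifted deployment data

model = LogisticRegression().fit(X_train, y_train)

print("in-distribution accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("deployment accuracy:     ", accuracy_score(y_deploy, model.predict(X_deploy)))
# The gap between the two numbers is the deployment cost the article points at:
# the optimization goal was a simplification of the real task.
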

Whenever there is a gap between the goals of supervised learning and the costs of a real-world deployment setting, a demand for interpretability emerges. Not every real-life goal can be encoded as a simple objective function. To give a specific example, an algorithm designed to make hiring decisions cannot optimize for both productivity and ethics at once. So a formal model that works within the context of a real-life environment is a struggle to build. To overcome this struggle, here are some aspects of interpretability to be taken into account: ... "
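
To make the hiring example concrete, here is a tiny hypothetical sketch (mine, not the article's); the candidate names, scores, and the ethics_weight parameter are all invented, and only show how a single scalar objective forces an arbitrary trade-off between productivity and ethics.

# Hypothetical sketch of the hiring example: folding "productivity" and an
# "ethics" proxy into one scalar objective requires an arbitrary trade-off
# weight that the data cannot choose for us.
def hiring_objective(productivity, fairness_gap, ethics_weight=0.5):
    """Scalarized goal: reward predicted productivity, penalize group disparity."""
    return productivity - ethics_weight * fairness_gap

# Two invented candidates: (predicted productivity, disparity their hire adds).
candidates = {"alice": (0.90, 0.40), "bob": (0.70, 0.05)}

for w in (0.0, 0.5, 1.0):
    ranked = sorted(candidates.items(),
                    key=lambda kv: hiring_objective(*kv[1], ethics_weight=w),
                    reverse=True)
    print(f"ethics_weight={w}: preferred candidate is {ranked[0][0]}")
# The ranking flips as the weight changes; the objective, not the data,
# decides which real-life goal wins.

Whichever weight is chosen, that choice is a value judgement hidden inside the objective function, which is exactly the gap interpretability is being asked to bridge.
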
