Managing risk in machine learning models
The O’Reilly Data Show Podcast: Andrew Burt and Steven Touw on how companies can manage models they cannot fully explain.
By Ben Lorica, O'Reilly
Check out Andrew Burt's talk "Beyond Explainability: Regulating Machine Learning in Practice" at the Strata Data Conference in New York, September 11-13, 2018. Hurry: early registration pricing ends July 27.
Subscribe to the O'Reilly Data Show Podcast to explore the opportunities and techniques driving big data, data science, and AI. Find us on Stitcher, TuneIn, iTunes, SoundCloud, RSS.
In this episode of the Data Show, I spoke with Andrew Burt, chief privacy officer at Immuta, and Steven Touw, co-founder and CTO of Immuta. Burt recently co-authored an upcoming white paper on managing risk in machine learning models, and I wanted to sit down with them to discuss some of the proposals they put forward to organizations that are deploying machine learning.
Some high-profile examples of models gone awry have raised awareness among companies of the need for better risk management tools and processes. There is now growing interest in ethics among data scientists, specifically in tools for monitoring bias in machine learning models. In a previous post, I listed some of the key considerations organizations should keep in mind as they move models to production, but the upcoming report co-authored by Burt goes far beyond that and recommends lines of defense, including a description of the key roles that are needed.
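To make the idea of bias monitoring concrete, here is a minimal, purely illustrative sketch (not drawn from the report) of one widely used fairness check: comparing the rates at which a model makes positive predictions for two demographic groups. The group labels and predictions below are hypothetical.

```python
# Illustrative sketch: the "disparate impact" ratio, one common bias metric.
# It compares the positive-prediction rate between two groups; values far
# from 1.0 are a common red flag worth investigating.

def positive_rate(preds):
    """Fraction of predictions in a group that are positive (1)."""
    return sum(preds) / len(preds)

def disparate_impact(preds_a, preds_b):
    """Ratio of group A's positive-prediction rate to group B's."""
    return positive_rate(preds_a) / positive_rate(preds_b)

# Hypothetical binary model outputs for two demographic groups
group_a = [1, 0, 1, 1, 0, 1, 0, 1]  # positive rate: 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 0, 0]  # positive rate: 2/8 = 0.25

ratio = disparate_impact(group_a, group_b)
print(round(ratio, 2))  # 2.5
```

A monitoring pipeline might compute a metric like this on every batch of production predictions and alert when it drifts outside an agreed-upon band; in practice, teams track several such metrics, since no single number captures fairness.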
Tuesday, June 26, 2018