Sunday, April 30, 2017
Thinking about AI Transparency and Risk

I have written about this before, and in our AI days dealt with it many times in real applications. A good overview with lots of useful references, though it needs more thought about the use of inherent models of risk.
In KDNuggets:
When something goes wrong, as it inevitably does, it can be a daunting task discovering the behavior that caused an event that is locked away inside a black box where discoverability is virtually impossible.
By Colin Lewis (Robotenomics) and Dagmar Monett (Berlin School of Economics and Law).
The black box in aviation, otherwise known as a flight data recorder, is an extremely secure device designed to provide researchers or investigators with highly factual information about any anomalies that may have led to incidents or mishaps during a flight.
The black box in Artificial Intelligence (AI) or Machine Learning programs [1] has taken on the opposite meaning. The latest approach in Machine Learning, where there have been 'important empirical successes' [2], is Deep Learning, yet there are significant concerns about transparency. ...
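To make the transparency concern a bit more concrete, here is a minimal sketch, my own and not from the Lewis and Monett piece, of one common model-agnostic probe: permutation feature importance, which shuffles each input in turn and watches how much a trained model's held-out accuracy drops. The dataset and classifier used here are illustrative assumptions only.

# A rough sketch, not from the article: permutation feature importance as one
# simple, model-agnostic way to ask which inputs an opaque model relies on.
# The dataset and classifier below are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a model we will treat as a black box: from here on we only call
# its predict/score interface, never inspect its internals.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much held-out accuracy drops;
# a large drop marks a feature the opaque model depends on heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: mean importance {result.importances_mean[idx]:.3f}")

Note that this only characterizes behavior from the outside; it does not explain the internal mechanism, which is part of the concern the article raises.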
Labels: AI, disco, Governance, Machine Learning, risk, Transparency