
Monday, November 07, 2016

Making Machines Explain Themselves

Do we accept the reasoning of a 'black box'? I recall our own use of neural networks, and the inability to check the underlying reasoning. What does it mean to have a system explain itself?

Making computers explain themselves
New training technique would reveal the basis for machine-learning systems’ decisions.
Larry Hardesty | MIT News Office 
October 27, 2016

In recent years, the best-performing systems in artificial-intelligence research have come courtesy of neural networks, which look for patterns in training data that yield useful predictions or classifications. A neural net might, for instance, be trained to recognize certain objects in digital images or to infer the topics of texts.

But neural nets are black boxes. After training, a network may be very good at classifying data, but even its creators will have no idea why. With visual data, it’s sometimes possible to automate experiments that determine which visual features a neural net is responding to. But text-processing systems tend to be more opaque. ... "
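The "automated experiments" the article mentions for visual data are often some form of occlusion sensitivity: mask part of the input, re-run the model, and see how much the output score drops. A minimal sketch of that idea, with a toy scoring function standing in for a trained network (the function and all names here are illustrative, not from the article):

```python
import numpy as np

# Toy stand-in for a trained network: scores an 8x8 "image" by how
# bright its top-left 4x4 patch is. A real network would be a black box.
def model_score(img):
    return img[:4, :4].mean()

def occlusion_map(img, score_fn, patch=2):
    """Slide a zeroed-out patch over the image and record, for each
    position, how much the model's score drops when that patch is hidden."""
    base = score_fn(img)
    h, w = img.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

img = np.ones((8, 8))
heat = occlusion_map(img, model_score)
# Cells with the largest score drop mark the regions the model relies on;
# here only the top-left quadrant matters, so only those cells are nonzero.
print(heat)
```

Because this probes the model only through its inputs and outputs, it works on any black box, which is exactly why it fits image data better than text: you can occlude pixels smoothly, but deleting words changes a sentence in less controlled ways.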
