
Sunday, April 01, 2018

Seeking Interpretability and Explanation as Components of our Brain

Modeling the apparent structure and resulting operation of the brain is difficult ....  We have lots of neurons gathering data and interacting in ways we do not fully understand, and that somehow produces high-level cognitive concepts like language or consciousness.  This piece looks at those interactions and seeks to build interpretive models from something we already know how to model reasonably well today: interpreting and tagging visual scenes.  But that is still only a small and simplistic portion of what the brain does.  What does it mean for deep intelligence?  It is like being handed a mass of Lego blocks and asked to model a city without a map.  You can see the engineers starting to sweat.

The Building Blocks of Interpretability

Interpretability techniques are normally studied in isolation.
We explore the powerful interfaces that arise when you combine them — and the rich structure of this combinatorial space. .... 

Researchers from Google and CMU explore ... 

Chris Olah, Google Brain
Arvind Satyanarayan, Google Brain  .... 

With the growing success of neural networks, there is a corresponding need to be able to explain their decisions — including building confidence about how they will behave in the real world, detecting model bias, and scientific curiosity. In order to do so, we need to both construct deep abstractions and reify (or instantiate) them in rich interfaces [1]. With a few exceptions [2, 3, 4], existing work on interpretability fails to do these in concert.

The machine learning community has primarily focused on developing powerful methods, such as feature visualization [5, 6, 7, 8, 9, 10], attribution [7, 11, 12, 13, 14, 15, 16, 17], and dimensionality reduction [18], for reasoning about neural networks. However, these techniques have been studied as isolated threads of research, and the corresponding work of reifying them has been neglected. On the other hand, the human-computer interaction community has begun to explore rich user interfaces for neural networks [19, 20, 21], but they have not yet engaged deeply with these abstractions. To the extent these abstractions have been used, it has been in fairly standard ways. As a result, we have been left with impoverished interfaces (e.g., saliency maps or correlating abstract neurons) that leave a lot of value on the table. Worse, many interpretability techniques have not been fully actualized into abstractions because there has not been pressure to make them generalizable or composable. .... 
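
To make the "attribution" thread above a bit more concrete, here is a minimal sketch of a vanilla gradient saliency map in PyTorch. It is my own illustrative example, not code from the Distill article; the tiny model and random image are placeholders. The point is simply that attribution asks which input pixels most influence a particular output score.

import torch
import torch.nn as nn

# Hypothetical tiny classifier standing in for a real vision model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

def saliency_map(net, image, target_class):
    # Basic attribution: |d score(target_class) / d pixel| for each input pixel.
    image = image.clone().requires_grad_(True)   # track gradients w.r.t. the input
    score = net(image.unsqueeze(0))[0, target_class]
    score.backward()                             # gradients flow back to the pixels
    return image.grad.abs().max(dim=0).values    # collapse channels to an H x W map

img = torch.rand(3, 32, 32)        # stand-in for a real image
heatmap = saliency_map(model, img, target_class=3)
print(heatmap.shape)               # torch.Size([32, 32])

The richer interfaces the authors argue for start from raw maps like this one and combine them with feature visualization and grouping, rather than stopping at the heat map itself.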
