
Wednesday, May 06, 2015

Deep Learning with Structure and Interpretation

A dozen years ago we were using neural nets to capture aspects of consumer behavior and interpret the results for marketing decisions. It was not what is today called 'Deep Learning', but it used some of the same mathematical tools. One of the primary problems was that it was hard to interpret the results to determine their validity. That problem has not gone away with new applications, but it is now being addressed:
 
In KDNuggets:
"A big problem with Deep Learning networks is that their internal representation lacks interpretability. At the upcoming #DeepLearning Summit, Charlie Tang, a student of Geoff Hinton, will present an approach to address this concern - here is a preview ..."
