We called these 'artificial neural nets' (ANN) and used them actively, but shallowly. The deepened versions are now called 'Deep Learning', and they are very successful at categorizing things like images. Our nets were much shallower, a handful of layers at most, and we used them for problems we had previously handled with standard, long-known statistical methods.
So now William Vorhies writes in DSC about
Beyond Deep Learning – 3rd Generation Neural Nets ...
'Summary: If Deep Learning is powered by 2nd generation neural nets, what will the 3rd generation look like? What new capabilities does that imply, and when will it get here? ...'
Nicely done; he mentions some of the limitations of neural nets and their use, the same things we experienced with our 'nets'. These are very data-intensive methods: they required tagged data for supervised learning and mostly did classification, though that last constraint could easily be stretched. They were trained on lots of data, then used to classify new test examples. It was easy to measure their accuracy and try again after engineering some parameters. Art and science.
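To make that workflow concrete, here is a minimal sketch of the loop described above: train a shallow net on tagged data, classify held-out examples, measure accuracy, then tune and try again. The dataset (scikit-learn's bundled digits) and the single 32-unit hidden layer are my illustrative choices, not anything from the article.

```python
# A minimal sketch of the old shallow-net supervised workflow.
# Dataset and layer sizes are illustrative assumptions.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)  # tagged (labeled) data

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# One hidden layer -- the 'handful of layers at most' era, not a deep net.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)  # train on much data

# Classify new examples and measure accuracy; if it falls short,
# engineer some parameters (layer sizes, learning rate) and try again.
print("accuracy:", accuracy_score(y_test, net.predict(X_test)))
```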
And then he takes it to what might come next. A good read without too much technological depth, and insightful for those who already have a taste of the idea. I am about to take a deeper dive.