
Wednesday, May 02, 2018

Sequence Problems for Natural Language Processing

Good piece by William Vorhies

Temporal Convolutional Nets (TCNs) Take Over from RNNs for NLP Predictions
Posted by William Vorhies in DSC

Summary: Our starting assumption that sequence problems (language, speech, and others) are the natural domain of RNNs is being challenged. Temporal Convolutional Nets (TCNs), which are our workhorse CNNs with a few new features, are outperforming RNNs on major applications today. Looks like RNNs may well be history.

It’s only been since 2014 or 2015 that our DNN-powered applications passed the 95% accuracy point on text and speech recognition, allowing for whole generations of chatbots, personal assistants, and instant translators.

Convolutional Neural Nets (CNNs) are the acknowledged workhorse of image and video recognition, while Recurrent Neural Nets (RNNs) have become the same for all things language.

One of the key differences is that CNNs recognize features in static images (or in video considered one frame at a time), while RNNs excel at text and speech, which are sequence or time-dependent problems: the next predicted character, word, or phrase depends on those that came before (left-to-right), introducing the concept of time and therefore sequence.
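To make that left-to-right dependence concrete, here is a minimal sketch (mine, not from Vorhies' article) of a single recurrent step in NumPy: the hidden state is the model's only memory, so each new prediction depends on everything that came before. The weight shapes and values are toy illustrations.

import numpy as np

rng = np.random.default_rng(0)
W_h = rng.normal(size=(4, 4)) * 0.1   # hidden-to-hidden weights (toy sizes)
W_x = rng.normal(size=(4, 3)) * 0.1   # input-to-hidden weights

def rnn_step(h, x):
    # One recurrent step: the new state mixes the previous state with
    # the current input, so h encodes the entire left-to-right prefix.
    return np.tanh(W_h @ h + W_x @ x)

h = np.zeros(4)
for x in rng.normal(size=(5, 3)):     # a toy sequence of 5 input vectors
    h = rnn_step(h, x)                # h now summarizes the sequence so far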

Actually, RNNs are good at all types of sequence problems, including speech/text recognition, language-to-language translation, handwriting recognition, sequence data analysis (forecasting), and even automatic code generation, in many different configurations. ... "
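For readers curious about the "few new features" that turn a plain CNN into a TCN, the TCN literature points mainly to causal convolutions (no peeking at future time steps) and dilation (exponentially growing receptive fields). Below is a minimal NumPy sketch of one causal, dilated 1-D convolution; the function name and toy values are illustrative assumptions, not taken from the article.

import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    # Causal dilated 1-D convolution: output[t] depends only on
    # x[t], x[t - d], x[t - 2d], ... and never on future time steps.
    k = len(w)
    pad = dilation * (k - 1)               # left-pad so no output sees the future
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

x = np.arange(8, dtype=float)              # a toy input sequence
w = np.array([0.5, 0.3, 0.2])              # a 3-tap filter
print(causal_dilated_conv1d(x, w, dilation=2))

Stacking such layers with increasing dilation lets a TCN cover long input histories, which is how a convolutional architecture can compete with RNNs on sequence tasks.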
