
Sunday, November 18, 2018

Use Weight Regularization to Reduce Overfitting of Deep Learning Models

Overfitting is a classic problem with all models. It means the model has fit the particular set of training data rather than the underlying, generalized problem. This is usually detected afterward, when testing against new data, but until then it can be dangerously misleading. Jason Brownlee discusses approaches to reduce overfitting. Follow Jason, lots of good nuggets.

Use Weight Regularization to Reduce Overfitting of Deep Learning Models, by Jason Brownlee in Better Deep Learning

Neural networks learn a set of weights that best map inputs to outputs.
A network with large weights can be a sign of an unstable network, where small changes in the input can lead to large changes in the output. This can be a sign that the network has overfit the training dataset and will likely perform poorly when making predictions on new data.

A solution to this problem is to update the learning algorithm to encourage the network to keep the weights small. This is called weight regularization and it can be used as a general technique to reduce overfitting of the training dataset and improve the generalization of the model.
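As a minimal sketch of the idea, the snippet below adds an L2 penalty (the most common form of weight regularization) to plain gradient descent on a linear model. The data, learning rate, and penalty strength are illustrative assumptions, not values from the post; the point is only that the penalty term in the gradient shrinks the learned weights.

```python
import numpy as np

# Illustrative synthetic data (not from the post).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
true_w = rng.normal(size=10)
y = X @ true_w + rng.normal(scale=0.1, size=50)

def fit(lam, lr=0.01, steps=500):
    """Gradient descent on MSE + lam * ||w||^2 (L2 weight regularization)."""
    w = np.zeros(10)
    for _ in range(steps):
        # Gradient of the loss plus the gradient of the L2 penalty, 2*lam*w.
        grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * w
        w -= lr * grad
    return w

w_plain = fit(lam=0.0)  # no regularization
w_reg = fit(lam=1.0)    # with L2 penalty

# The penalty encourages the network to keep the weights small.
print(np.linalg.norm(w_reg) < np.linalg.norm(w_plain))  # True
```

The same effect is what `kernel_regularizer` options provide in deep learning frameworks: the penalty is simply added to the training loss, so its gradient pulls every weight toward zero on each update.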

In this post, you will discover weight regularization as an approach to reduce overfitting for neural networks. ... "
