A nice technical piece that points at some of the 'art' of deep learning. These are the kinds of near-intuitive practices that would have to be embedded in any completely autonomous system. We sometimes crowd-sourced these methods across multiple practitioners when we wanted other measures of variability. Note too that you are not always after optimization; there are other forms of model variability to consider. So when I see the word 'optimize' I am cautious, since it only ever exists in some context. The article's graphics give good, intuitive direction.
An introduction to high-dimensional hyper-parameter tuning
Best practices for optimizing ML models, by Thalles Silva
If you have ever struggled with tuning Machine Learning (ML) models, you are reading the right piece.
Hyper-parameter tuning refers to the problem of finding an optimal set of hyper-parameter values for a learning algorithm.
Choosing these values is usually a time-consuming task.
Even for simple algorithms like Linear Regression, finding the best set of hyper-parameters can be tough. With Deep Learning, things get even worse.
Some of the parameters to tune when optimizing neural nets (NNs) include: ...
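The excerpt stops at the list of tunable parameters, but it helps to see what tuning actually looks like in code. Below is a minimal sketch of random search, a common baseline for this problem, over a few hyper-parameters of a small scikit-learn neural net. The dataset, search ranges, and trial count are illustrative assumptions of mine, not taken from the article.

```python
# Minimal random-search sketch for hyper-parameter tuning.
# The data, ranges, and 20-trial budget are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Toy classification problem standing in for real data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

rng = np.random.default_rng(0)
best_score, best_params = -np.inf, None

for _ in range(20):  # 20 random trials
    # Sample each hyper-parameter independently: log-uniform for the
    # learning rate and L2 strength, a discrete choice for layer width.
    params = {
        "hidden_layer_sizes": (int(rng.choice([16, 32, 64, 128])),),
        "learning_rate_init": 10 ** rng.uniform(-4, -1),
        "alpha": 10 ** rng.uniform(-6, -2),  # L2 regularization strength
    }
    model = MLPClassifier(max_iter=200, random_state=0, **params)
    score = cross_val_score(model, X, y, cv=3).mean()  # 3-fold CV accuracy
    if score > best_score:
        best_score, best_params = score, params

print(f"best CV accuracy: {best_score:.3f} with {best_params}")
```

Grid search would enumerate every combination instead; in high dimensions, random search tends to find good regions with fewer trials because it does not spend its budget exhaustively on parameters that turn out not to matter.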
Thursday, January 24, 2019