I have often been involved with this question for many kinds of analytic methods. One hundred data points is too few for something that is expected to be precisely predictive, but how much is enough? The question itself is not precisely answerable. It's always better to have more, especially if you expect some drift in the answer, or if you need to repeat testing frequently. So get all you can.
Interestingly, the full article does not quite back up the statement in the title below. Deep learning does not always need 100K examples; we have done well with much less. It depends on the nature and use of the solution and on the variability of the data. The general scale statement of about 10K and up is reasonable.
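One rough way to probe the "how much is enough" question for a particular problem is a learning curve: train on growing slices of the data and check whether held-out accuracy is still climbing. The sketch below is only illustrative, assuming scikit-learn and a synthetic stand-in dataset rather than anything from the article.

# Minimal learning-curve sketch; the dataset and model choices are assumptions for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

# Stand-in data; replace with your own records and labels.
X, y = make_classification(n_samples=20000, n_features=50, n_informative=10, random_state=0)

model = LogisticRegression(max_iter=1000)

# Score the model on progressively larger training subsets with cross-validation.
sizes, train_scores, val_scores = learning_curve(
    model, X, y,
    train_sizes=np.linspace(0.05, 1.0, 8),
    cv=5,
    scoring="accuracy",
)

for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:>6d} training examples -> {score:.3f} cross-validated accuracy")
# If the curve is still rising at the largest size, more data is likely to help;
# if it has flattened, additional labeling effort may be better spent elsewhere.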
Google Brain chief: Deep learning takes at least 100,000 examples
By Blair Hanley Frank in VentureBeat via KDnuggets
While the current class of deep learning techniques is helping fuel the AI wave, one of the frequently cited drawbacks is that they require a lot of data to work. But how much is enough data?
“I would say pretty much any business that has tens or hundreds of thousands of customer interactions has enough scale to start thinking about using these sorts of things,” Jeff Dean, a senior fellow at Google, said in an onstage interview at the VB Summit in Berkeley, California. “If you only have 10 examples of something, it’s going to be hard to make deep learning work. If you have 100,000 things you care about, records or whatever, that’s the kind of scale where you should really start thinking about these kinds of techniques.” .... '
Wednesday, October 25, 2017