Ultimately, the key to making this work is measuring the results, starting with benchmarks.
How to Evaluate Machine Learning? U of Toronto Research Supports Latest Benchmark Initiative
U of Toronto News By Nina Haikara
An industrial-academic consortium that includes Google, the University of Toronto (U of T) in Canada, and Harvard and Stanford universities is developing a new benchmark suite for assessing machine learning (ML) performance. U of T's Gennady Pekhimenko says the MLPerf consortium is investigating two benchmarking areas: an "open" category, in which any model can be applied to a fixed dataset, and a "closed" category, in which both model and dataset are fixed, making it possible to compare execution time, power requirements, and design cost. Pekhimenko notes his laboratory has developed an open source benchmark suite called TBD (To Be Determined) as a training benchmark for deep neural networks. "We're interested in understanding how well available hardware and software perform, but we also look at both hardware and software efficiency," he says. "We then provide hints to the ML developers, so they can make their networks more efficient, and hence develop new algorithms and insights faster."
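The "closed" category described above hinges on timing a fixed workload fairly. As a minimal sketch of that idea (this is an illustrative timing harness, not MLPerf's or TBD's actual code; `benchmark_training` and `dummy_step` are hypothetical names), one might warm up the workload, then average wall-clock time per training step:

```python
import time

def benchmark_training(step_fn, warmup=2, iters=10):
    """Time a training-step function: run warm-up steps that are
    excluded from measurement, then report average seconds per step."""
    for _ in range(warmup):
        step_fn()                        # warm-up, not measured
    start = time.perf_counter()
    for _ in range(iters):
        step_fn()                        # measured iterations
    elapsed = time.perf_counter() - start
    return elapsed / iters               # average seconds per step

# Stand-in for one training step; a real suite would run a DNN update.
def dummy_step():
    sum(i * i for i in range(10_000))

avg = benchmark_training(dummy_step)
print(f"avg step time: {avg:.6f} s")
```

Averaging over many steps after a warm-up phase is a common convention in training benchmarks, since the first iterations are often dominated by one-time costs such as compilation and memory allocation.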