Metrics can help get us out of purely hype-based impressions.
The challenge of finding reliable AI performance benchmarks
By James Kobielus in SiliconAngle
Artificial intelligence can be extremely resource-intensive. Generally, AI practitioners seek out the fastest, most scalable, most power-efficient and lowest-cost hardware, software and cloud platforms to run their workloads.
As the AI arena shifts toward workload-optimized architectures, there’s a growing need for standard benchmarking tools to help machine learning developers and enterprise information technology professionals assess which target environments are best suited for any specific training or inferencing job. Historically, the AI industry has lacked reliable, transparent, standard and vendor-neutral benchmarks for flagging performance differences between different hardware, software, algorithms and cloud configurations that might be used to handle a given workload.
In a key AI industry milestone, the newly formed MLPerf open-source benchmark group last week announced the launch of a standard suite for benchmarking the performance of ML software frameworks, hardware accelerators and cloud platforms. The group — which includes Google, Baidu, Intel, AMD and other commercial vendors, as well as research universities such as Harvard and Stanford — is attempting to create an ML performance-comparison tool that is open, fair, reliable, comprehensive, flexible and affordable. ...
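To make the idea concrete: benchmarks of this kind standardize a fixed workload, measure wall-clock time to complete it, and report a derived figure such as throughput or time-to-train that can be compared across hardware and frameworks. The sketch below is not MLPerf's actual suite or methodology; it is a toy illustration (a dense matrix-multiply stand-in for a real ML workload, with made-up sizes) of what such a measurement looks like.

```python
# Illustrative sketch only -- NOT the MLPerf suite. It times a toy
# matrix-multiply "workload" and reports throughput, the general shape of
# metric (fixed work divided by elapsed time) that such benchmarks compare
# across hardware, frameworks and cloud configurations.
import time
import numpy as np

def run_workload(size=1024, iterations=50):
    """Run a fixed amount of dense linear-algebra work and time it."""
    a = np.random.rand(size, size).astype(np.float32)
    b = np.random.rand(size, size).astype(np.float32)

    start = time.perf_counter()
    for _ in range(iterations):
        a @ b  # the "work" being measured
    elapsed = time.perf_counter() - start

    flops = 2 * size**3 * iterations          # approx. floating-point ops
    return elapsed, flops / elapsed / 1e9     # seconds, GFLOP/s

if __name__ == "__main__":
    seconds, gflops = run_workload()
    print(f"elapsed: {seconds:.2f}s  throughput: {gflops:.1f} GFLOP/s")
```

Real suites differ in what they fix (model, dataset, target accuracy) and what they let vendors vary (hardware, framework, precision), but the comparison ultimately rests on simple, reproducible measurements like this one.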
Sunday, July 01, 2018