
Monday, September 09, 2019

Accelerating AI with Open Source, and More

Update on MLIR, which we looked at earlier. See who has joined the consortium. Architecture is always a key element of doing anything well, and to do things efficiently it makes sense to share the work. I would further add that there should be better shared ways to manage varying data 'infrastructures' by problem domain, in both the semantics of the data and its metadata. Let's make that happen too.

Chris Lattner, Distinguished Engineer, TensorFlow
Tim Davis, Product Manager, TensorFlow

Machine learning now runs on everything from cloud infrastructure containing GPUs and TPUs, to mobile phones, to even the smallest hardware like microcontrollers that power smart devices. The combination of advancements in hardware and open-source software frameworks like TensorFlow is making all of the incredible AI applications we’re seeing today possible--whether it’s predicting extreme weather, helping people with speech impairments communicate better, or assisting farmers to detect plant diseases. 

But with all this progress happening so quickly, the industry is struggling to keep up with making different machine learning software frameworks work with a diverse and growing set of hardware. The machine learning ecosystem is dependent on many different technologies with varying levels of complexity that often don't work well together. The burden of managing this complexity falls on researchers, enterprises and developers. By slowing the pace at which new machine learning-driven products can go from research to reality, this complexity ultimately affects our ability to solve challenging, real-world problems. 

Earlier this year we announced MLIR, open source machine learning compiler infrastructure that addresses the complexity caused by growing software and hardware fragmentation and makes it easier to build AI applications. It offers new infrastructure and a design philosophy that enables machine learning models to be consistently represented and executed on any type of hardware. And today we’re announcing that we’re contributing MLIR to the nonprofit LLVM Foundation. This will enable even faster adoption of MLIR by the industry as a whole. ... "
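To make "consistently represented" concrete: MLIR models computations as operations grouped into dialects, which can be progressively lowered from framework-level ops toward hardware-specific code. The fragment below is an illustrative sketch only (not from the announcement), showing a hypothetical TensorFlow-dialect addition in MLIR's textual IR.

```mlir
// Hypothetical sketch: a function adding two 4-element float tensors,
// expressed with a TensorFlow-dialect op ("tf.Add"). A compiler pipeline
// would lower this same representation through intermediate dialects
// down to code for a GPU, TPU, or microcontroller.
func @add(%a: tensor<4xf32>, %b: tensor<4xf32>) -> tensor<4xf32> {
  %0 = "tf.Add"(%a, %b) : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
  return %0 : tensor<4xf32>
}
```

The point of the shared representation is that each hardware target only needs a lowering from a common dialect, rather than a bespoke bridge from every framework.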
