
Monday, December 17, 2018

Quantum Neural Networks being Explored

In the Google AI Blog.   Quite an interesting play here: updating neural nets in a quantum fashion, with two more technical papers pointed to.   Faster, or perhaps a means of introducing more subtle aspects of learning and creativity?  This is a technical piece by its nature, but reasonably clearly written.  Not quite obvious when we can expect results from this work.  It points to the need for new training methods.   Perhaps the training also needs to be quantum in nature?  We know Google worked with quantum devices like D-Wave early on, testing them for binary image classification.  Watching.

Exploring Quantum Neural Networks
Monday, December 17, 2018
Posted by Jarrod McClean, Senior Research Scientist and Hartmut Neven, Director of Engineering, Google AI Quantum Team

Since its inception, the Google AI Quantum team has pushed to understand the role of quantum computing in machine learning. The existence of algorithms with provable advantages for global optimization suggests that quantum computers may be useful for training existing machine learning models more quickly, and we are building experimental quantum computers to investigate how intricate quantum systems can carry out these computations. While this may prove invaluable, it does not yet touch on the tantalizing idea that quantum computers might be able to provide a way to learn more about complex patterns in physical systems that conventional computers cannot in any reasonable amount of time.

Today we talk about two recent papers from the Google AI Quantum team that make progress towards understanding the power of quantum computers for learning tasks. The first constructs a quantum model of neural networks to investigate how a popular classification task might be carried out on quantum processors. In the second paper, we show how peculiar features of quantum geometry change the strategies for training these networks in comparison to their classical counterparts, and offer guidance towards more robust training of these networks. ...
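To make the idea of a "quantum model of neural networks" a bit more concrete, here is a minimal, purely illustrative sketch: a one-qubit parameterized circuit, simulated classically with numpy, used as a binary classifier. This is an assumption-laden toy, not the construction from either paper: the data encoding (an RY rotation by the input), the single trainable RY parameter, and the brute-force parameter scan are all choices made for this sketch.

```python
import numpy as np

def ry(angle):
    """Single-qubit rotation about the Y axis (real-valued matrix)."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def predict(x, theta):
    """Expectation of Z on RY(theta) RY(x) |0>, a value in [-1, 1]."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(state @ z @ state)  # amplitudes are real here, no conjugate needed

def loss(theta, xs, ys):
    """Mean squared error against labels in {-1, +1}."""
    return float(np.mean([(predict(x, theta) - y) ** 2 for x, y in zip(xs, ys)]))

# Toy data: inputs near 0 labeled +1, inputs near pi labeled -1.
xs = np.array([0.1, 0.2, 3.0, 3.1])
ys = np.array([1, 1, -1, -1])

# Crude "training": scan the single parameter for the lowest loss
# (a stand-in for the gradient-based strategies the second paper analyzes).
thetas = np.linspace(-np.pi, np.pi, 201)
best = min(thetas, key=lambda t: loss(t, xs, ys))
print(best, loss(best, xs, ys))
```

On real hardware the expectation value would be estimated from repeated measurements rather than computed exactly, and the training landscape of such circuits is precisely where the geometric subtleties mentioned above come into play.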
