
Tuesday, April 05, 2022

Spooky Actions at Work

Linking the “spooky action” of quantum entanglement to machine learning.

Spooky Action Could Help Boost Quantum Machine Learning
Mysterious quantum links could help lead to exponential scale-up
By Charles Q. Choi in IEEE Spectrum

Machine learning, which now powers speech recognition, computer vision, and more, could prove even more powerful when run on quantum computers. Now a new study finds that the strange quantum phenomenon known as entanglement, which Einstein dubbed “spooky action at a distance,” might help remove a major potential roadblock to implementing quantum machine learning.

Quantum computers can theoretically prove more powerful than any conventional computer on a number of tasks, such as finding a number’s prime factors—the mathematical foundation of the modern encryption currently protecting banking and other secure data. The more components known as qubits that are linked together in a quantum computer through entanglement—wherein multiple particles can influence each other instantaneously regardless of how far apart they are—the greater its computational power can grow, in an exponential fashion.
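The exponential claim can be made concrete: the joint state of n qubits is described by 2**n complex amplitudes, so each added qubit doubles the classical storage needed to describe the system. A minimal NumPy sketch (my illustration, not from the article; kron_all is a made-up helper) showing the doubling, plus the simplest entangled state, a Bell pair:

```python
import numpy as np

def kron_all(states):
    """Tensor (Kronecker) product of a list of single-qubit states."""
    out = states[0]
    for s in states[1:]:
        out = np.kron(out, s)
    return out

zero = np.array([1, 0], dtype=complex)  # |0>
one = np.array([0, 1], dtype=complex)   # |1>

# The state vector of n qubits has 2**n amplitudes: each added qubit
# doubles its size, which is the exponential growth described above.
for n in range(1, 6):
    state = kron_all([zero] * n)
    print(f"{n} qubits -> {state.size} amplitudes")  # 2, 4, 8, 16, 32

# A Bell state, (|00> + |11>) / sqrt(2): the simplest entangled pair.
bell = (kron_all([zero, zero]) + kron_all([one, one])) / np.sqrt(2)
print(bell)  # [0.707, 0, 0, 0.707]
```

The Bell vector cannot be factored into a product of two single-qubit states, which is what “entangled” means in this context.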

Scientists are still researching the specific problems for which quantum computing might have an advantage over classical computing. Recently, they have begun exploring whether quantum computing might help boost machine learning, the field of AI that investigates algorithms that improve automatically through experience.

One potential application of quantum machine learning is simulating quantum systems—for instance, chemical reactions that might yield insights leading to next-generation batteries or new drugs. This might entail creating models of the molecules of interest, having them interact, and using experimental data on how the actual compounds interact as training data to help improve the models.
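The article stops at the workflow description. As a rough classical stand-in for “using experimental data ... to help improve the models,” here is a hypothetical sketch in which a parameterized surrogate is fit to synthetic measurements; the model form, data, and names are all assumptions, and in the quantum setting the surrogate would be a parameterized quantum circuit rather than a line:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for experimental measurements of a reaction
# property versus some condition (purely illustrative, not real data).
conditions = np.linspace(0.0, 1.0, 20)
measured = 2.5 * conditions + 0.3 + rng.normal(0, 0.05, conditions.size)

def model(params, x):
    """Toy parameterized model of the interaction."""
    a, b = params
    return a * x + b

# Improve the model against the experimental training data by
# gradient descent on the mean-squared error.
params = np.array([0.0, 0.0])
lr = 0.5
for _ in range(500):
    resid = model(params, conditions) - measured
    grad = np.array([np.mean(2 * resid * conditions), np.mean(2 * resid)])
    params -= lr * grad

print(params)  # approaches the generating values [2.5, 0.3]
```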

A major potential stumbling block for quantum machine learning is the so-called “no free lunch” theorem. The theorem suggests any machine learning algorithm is as good as, but no better than, any other when their performance is averaged over many problems and sets of training data. ...
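For reference, the classical form of the result (Wolpert and Macready's formulation, not quoted in the article) says that for any two learning algorithms $a_1$ and $a_2$ and a fixed number of training examples $m$, when performance is summed uniformly over all possible target functions $f$,

\[
\sum_{f} P(c \mid f, m, a_1) \;=\; \sum_{f} P(c \mid f, m, a_2),
\]

where $c$ is the off-training-set cost. In other words, any advantage one algorithm gains on some class of problems is exactly offset by its losses elsewhere.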
