
Friday, December 02, 2022

Breaking the Scaling Limits of Analog Optical Computing

Reducing error in optical neural networks

Breaking the Scaling Limits of Analog Computing, by MIT News, November 30, 2022

Using the new technique, the larger an optical neural network becomes, the lower the error in its computations. 

Massachusetts Institute of Technology researchers have developed a technique that greatly reduces the error in an optical neural network.

As machine-learning models become larger and more complex, they require faster and more energy-efficient hardware to perform computations. Conventional digital computers are struggling to keep up.

An analog optical neural network could perform the same tasks as a digital one, such as image classification or speech recognition, but because computations are performed using light instead of electrical signals, optical neural networks can run many times faster while consuming less energy.

However, these analog devices are prone to hardware errors that can make computations less precise. Microscopic imperfections in hardware components are one cause of these errors. In an optical neural network that has many connected components, errors can quickly accumulate.
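
As a rough illustration of how such errors compound, the following is a minimal sketch in Python. It is not taken from the MIT work: it assumes a simplified optical mesh built from pairwise mode rotations, with small Gaussian phase noise standing in for the microscopic fabrication imperfections, and compares the noisy transform against the ideal one as the mesh grows.

import numpy as np

rng = np.random.default_rng(0)

def rotation(theta):
    # 2x2 beamsplitter-like rotation acting on a pair of optical modes.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def mesh(thetas, n_modes, sigma=0.0):
    # Build an n_modes x n_modes transform from a chain of pairwise rotations.
    # Each angle is perturbed by Gaussian noise of std `sigma`, a stand-in
    # (assumed here) for per-component hardware imperfections.
    U = np.eye(n_modes)
    for layer, theta in enumerate(thetas):
        i = layer % (n_modes - 1)          # which adjacent mode pair to mix
        T = np.eye(n_modes)
        T[i:i + 2, i:i + 2] = rotation(theta + rng.normal(0.0, sigma))
        U = T @ U
    return U

for n in (4, 16, 64):
    n_components = n * (n - 1) // 2        # rough component count for an n-mode mesh
    thetas = rng.uniform(0, 2 * np.pi, n_components)
    ideal = mesh(thetas, n, sigma=0.0)
    noisy = mesh(thetas, n, sigma=0.01)    # 1% phase error per component
    err = np.linalg.norm(noisy - ideal) / np.linalg.norm(ideal)
    print(f"{n:3d} modes, {n_components:4d} components -> relative error {err:.3f}")

With a fixed per-component error, the deviation of the noisy transform from the ideal one grows as more components are chained together, which is the scaling problem the new technique is meant to address.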

Even with error-correction techniques, due to fundamental properties of the devices that make up an optical neural network, some amount of error is unavoidable. Without a way to contain it, a network large enough to be implemented in the real world would be far too imprecise to be effective.

From MIT News | View Full Article
