
Tuesday, December 03, 2019

Improving Quantum Error Correction

Results in quantum computing are statistical, yet they need to be precise. How do we correct them to the required precision? A mostly non-technical discussion of the progress, from CACM.

Closing In on Quantum Error Correction   By Don Monroe
Communications of the ACM, October 2019, Vol. 62 No. 10, Pages 11-13
10.1145/3355371

After decades of research, quantum computers are approaching the scale at which they could outperform their "classical" counterparts on some problems. They will be truly practical, however, only when they implement quantum error correction, which combines many physical quantum bits, or qubits, into a logical qubit that preserves its quantum information even when its constituents are disrupted. Although this task once seemed impossible, theorists have developed multiple techniques for doing so, including "surface codes" that could be implemented in an integrated-circuit-like planar geometry.

For ordinary binary data, errors can be corrected, for example, using the majority rule: A desired bit, whether 1 or 0, is first triplicated as 111 or 000. Later, even if one of the three bits has been corrupted, the other two "outvote" it and allow recovery of the original data. Unfortunately, the "no-cloning" theorem of quantum mechanics forbids the duplication (or triplication) of a qubit.
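(For illustration, a minimal Python sketch of that classical majority-rule repetition scheme; the function names here are ours, not from the article.)

# A minimal sketch of the classical majority-rule repetition code described above.
# encode() triplicates one bit; decode() takes a majority vote, so a single
# corrupted copy is outvoted and the original bit is recovered.

def encode(bit):
    return [bit, bit, bit]

def decode(copies):
    return 1 if sum(copies) >= 2 else 0

codeword = encode(1)          # [1, 1, 1]
codeword[0] ^= 1              # corrupt one copy: [0, 1, 1]
assert decode(codeword) == 1  # the other two copies outvote the error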

Moreover, the power of quantum computing emerges from having arbitrary mixtures of bit values. Since any combination is valid, it would seem impossible to detect a change from the original combination, let alone correct it.

"Most people who were doing quantum information in the '80s and '90s would say we'll never be able to do error correction in these systems, because it's analog error correction," explained Raymond Laflamme of the University of Waterloo and the Perimeter Institute, both in Waterloo, Ontario.

Fortunately, those experts Laflamme spoke of were mistaken.

Unlike a classical bit that has a value of either 0 or 1, a single qubit can encode a weighted "superposition" of both values. A set of N qubits can thus represent 2^N different states simultaneously. There is strong mathematical evidence that devices handling qubits could solve some large problems much faster than traditional computers could, even in principle.
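(A rough sketch of that bookkeeping in Python with NumPy, just to illustrate the counting: the state of N qubits is described by 2^N complex weights. This is a classical simulation of the description, not how quantum hardware operates.)

# A sketch of the state bookkeeping: N qubits are described by 2**N complex
# weights (amplitudes). The uniform superposition below puts equal weight on
# every one of the 2**N basis states at once.
import numpy as np

N = 3
state = np.ones(2**N, dtype=complex) / np.sqrt(2**N)   # 8 amplitudes for 3 qubits
print(len(state))                                      # 2**N = 8
print(np.sum(np.abs(state)**2))                        # squared weights sum to 1.0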

In the mid-1990s, mathematician Peter Shor, then at AT&T Bell Laboratories in Murray Hill, NJ, described a powerful example of using such a quantum algorithm to efficiently factor large numbers. Because public-key encryption relies on this task being impractically slow, the result helped transform quantum computing from a physics curiosity to a technology with potential national security implications.

Quantum computations exploit the fact that the ultimate weights assigned to different configurations of qubits are the sum of contributions that can be positive, negative, or even complex numbers. The algorithms combine qubits so that these contributions reinforce one another for the desired solution, but cancel each other out for the many incorrect configurations. A key step in Shor's algorithm, for example, is finding the repetition period of a sequence of numbers generated from the factoring target, essentially by performing a Fourier transform on the entire sequence that produces a peak at the correct periodicity.
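(A classical illustration of that period-finding step, as a Python sketch; the numbers, factoring 15 with base 7, are our own toy example. Shor's algorithm finds the same peak with a quantum Fourier transform applied to a superposition, which is what makes it efficient; a classical FFT like this one does not scale.)

# A classical illustration of the period-finding step: for factoring target M
# and base a, the sequence a**x mod M repeats with some period r, and a Fourier
# transform of the sequence peaks at multiples of len(seq)/r. The toy numbers
# (M = 15, a = 7) are our own example.
import numpy as np

M, a = 15, 7
seq = np.array([pow(a, x, M) for x in range(64)])        # 1, 7, 4, 13, 1, 7, ...
spectrum = np.abs(np.fft.fft(seq - seq.mean()))          # remove the DC component
peak = int(np.argmax(spectrum[1:len(seq) // 2])) + 1     # strongest frequency: 16
r = round(len(seq) / peak)                               # period r = 4
print(r, np.gcd(a**(r // 2) - 1, M), np.gcd(a**(r // 2) + 1, M))   # 4 3 5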

The final step is measuring the qubits to see which are 1 and which are 0. The probability of a particular outcome is proportional to the square of the corresponding weight. The measurement leaves each qubit unambiguously in the measured state. ... "
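(A minimal sketch of that measurement rule, with a made-up two-qubit state as the example.)

# A sketch of the measurement rule: the probability of reading out a given bit
# pattern is the squared magnitude of its weight, and the qubits are then left
# in that pattern. The two-qubit state below is a made-up example.
import numpy as np

amplitudes = np.array([0.6, 0.0, 0.0, 0.8j])        # 0.6|00> + 0.8i|11>
probs = np.abs(amplitudes) ** 2                     # [0.36, 0, 0, 0.64]
outcome = np.random.choice(len(amplitudes), p=probs)
print(format(outcome, "02b"))                       # "00" about 36% of runs, "11" about 64%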
