Authors: Sergey Bravyi, Andrew W. Cross, Jay M. Gambetta, Dmitri Maslov, Patrick Rall, Theodore J. Yoder

Abstract

The accumulation of physical errors1,2,3 prevents the execution of large-scale algorithms in current quantum computers. Quantum error correction promises a solution by encoding k logical qubits onto a larger number n of physical qubits, such that the physical errors are suppressed enough to allow running a desired computation with tolerable fidelity. Quantum error correction becomes practically realizable once the physical error rate is below a threshold value that depends on the choice of quantum code, syndrome measurement circuit and decoding algorithm4. We present an end-to-end quantum error correction protocol that implements fault-tolerant memory on the basis of a family of low-density parity-check codes5,6. Our approach achieves an error threshold of 0.7% for the standard circuit-based noise model, on par with the surface code7,8,9,10 that for 20 years was the leading code in terms of error threshold. The syndrome measurement cycle for a length-n code in our family requires n ancillary qubits and a depth-8 circuit composed of CNOT gates, qubit initializations and measurements. The required qubit connectivity is a degree-6 graph composed of two edge-disjoint planar subgraphs. In particular, we show that 12 logical qubits can be preserved for nearly 1 million syndrome cycles using 288 physical qubits in total, assuming a physical error rate of 0.1%, whereas the surface code would require nearly 3,000 physical qubits to achieve the same performance. Our findings bring demonstrations of a low-overhead fault-tolerant quantum memory within the reach of near-term quantum processors.

Main

Quantum computing has attracted attention due to its ability to offer asymptotically faster solutions to a set of computational problems compared with the best known classical algorithms5.
It is believed that a functioning scalable quantum computer may help solve computational problems in such areas as scientific discovery, materials research, chemistry and drug design, to name a few11,12,13,14.

The main obstacle to building a quantum computer is the fragility of quantum information, owing to the various sources of noise affecting it. Because isolating a quantum computer from external effects and controlling it to induce a desired computation are in conflict with each other, noise appears to be inevitable. The sources of noise include imperfections in qubits, materials used, controlling apparatus, state preparation and measurement errors and a variety of external factors ranging from local man-made ones, such as stray electromagnetic fields, to those inherent to the Universe, such as cosmic rays. See ref. 15 for a summary. Whereas some sources of noise can be eliminated with better control16, materials and shielding17,18,19, several other sources appear to be difficult, if at all possible, to remove. The latter kind includes spontaneous and stimulated emission in trapped ions20 and the interaction with the bath (Purcell effect) in superconducting circuits, covering both leading quantum technologies. Thus, error correction becomes a key requirement for building a functioning scalable quantum computer.

The possibility of quantum fault tolerance is well established4. Encoding a logical qubit redundantly into many physical qubits enables one to diagnose and correct errors by repeatedly measuring syndromes of parity-check operators. However, error correction is only beneficial if the hardware error rate is below a certain threshold value that depends on the particular error correction protocol. The first proposals for quantum error correction, such as concatenated codes21,22,23, focused on demonstrating the theoretical possibility of error suppression.
As understanding of quantum error correction and the capabilities of quantum technologies matured, the focus shifted to finding practical quantum error correction protocols. This resulted in the development of the surface code7,8,9,10, which offers a high error threshold close to 1%, fast decoding algorithms and compatibility with existing quantum processors relying on two-dimensional (2D) square-lattice qubit connectivity. Small examples of the surface code with a single logical qubit have already been demonstrated experimentally by several groups24,25,26,27,28. However, scaling up the surface code to 100 or more logical qubits would be prohibitively expensive owing to its poor encoding efficiency. This spurred interest in more general quantum codes known as low-density parity-check (LDPC) codes6. Recent progress in the study of LDPC codes suggests that they can achieve quantum fault tolerance with a much higher encoding efficiency29. Here, we focus on the study of LDPC codes, as our goal is to find quantum error correction codes and protocols that are both efficient and possible to demonstrate in practice, given the limitations of quantum computing technologies.

A quantum error correcting code is of LDPC type if each check operator of the code acts only on a few qubits and each qubit participates in only a few checks. Several variants of LDPC codes have been proposed recently, including hyperbolic surface codes30,31,32, hypergraph product codes33, balanced product codes34, two-block codes based on finite groups35,36,37,38 and quantum Tanner codes39,40. The latter were shown39,40 to be asymptotically ‘good’ in the sense of offering a constant encoding rate and linear distance: a parameter quantifying the number of correctable errors. By contrast, the surface code has an asymptotically zero encoding rate and only square-root distance. Replacing the surface code with a high-rate, high-distance LDPC code could have major practical implications.
First, the fault-tolerance overhead (the ratio between the number of physical and logical qubits) could be reduced notably. Second, high-distance codes show a very sharp decrease in the logical error rate: as the physical error probability crosses the threshold value, the amount of error suppression achieved by the code can increase by orders of magnitude even with a small reduction of the physical error rate. This feature makes high-distance LDPC codes attractive for near-term demonstrations that are likely to operate in the near-threshold regime. However, it was previously believed that outperforming the surface code for realistic noise models, including memory, gate and state preparation and measurement errors, may require very large LDPC codes with more than 10,000 physical qubits31.

Here we present several concrete examples of high-rate LDPC codes with a few hundred physical qubits equipped with a low-depth syndrome measurement circuit, an efficient decoding algorithm and a fault-tolerant protocol for addressing individual logical qubits. These codes show an error threshold close to 0.7%, show excellent performance in the near-threshold regime and offer a tenfold reduction of the encoding overhead compared with the surface code. Hardware requirements for realizing our error correction protocols are relatively mild, as each physical qubit is coupled by two-qubit gates with only six other qubits. Although the qubit connectivity graph is not locally embeddable into a 2D grid, it can be decomposed into two planar degree-3 subgraphs. As we argue below, such qubit connectivity is well suited for architectures based on superconducting qubits. Our codes are a generalization of the bicycle codes proposed by MacKay et al.41 and studied in more depth in refs. 35,36,42. We named our codes bivariate bicycle (BB) codes because they are based on bivariate polynomials, as detailed in the Methods.
These are stabilizer codes of the Calderbank–Shor–Steane (CSS) type43,44 that can be described by a collection of six-qubit check (stabilizer) operators composed of Pauli X and Z. At a high level, a BB code is similar to the two-dimensional toric code7. In particular, physical qubits of a BB code can be laid out on a two-dimensional grid with periodic boundary conditions such that all check operators are obtained from a single pair of X and Z checks by applying horizontal and vertical shifts of the grid. However, in contrast to the plaquette and vertex stabilizers describing the toric code, check operators of BB codes are not geometrically local. Furthermore, each check acts on six qubits rather than four. We will describe the code by a Tanner graph G such that each vertex of G represents either a data qubit or a check operator. A check vertex i and a data vertex j are connected by an edge if the i-th check operator acts non-trivially on the j-th data qubit (by applying Pauli X or Z). See Fig. 1a,b for example Tanner graphs of surface and BB codes, respectively. The Tanner graph of any BB code has vertex degree six and graph thickness equal to two, which means it can be decomposed into two edge-disjoint planar subgraphs (Methods). Thickness-2 qubit connectivity is well suited for superconducting qubits coupled by microwave resonators. For example, two planar layers of couplers and their control lines can be attached to the top and the bottom side of the chip hosting qubits, and the two sides mated.

Fig. 1: a, Tanner graph of a surface code, for comparison. b, Tanner graph of a BB code with parameters [[144, 12, 12]] embedded into a torus. Any edge of the Tanner graph connects a data and a check vertex. Data qubits associated with the registers q_L and q_R are shown by blue and orange circles. Each vertex has six incident edges, including four short-range edges (pointing north, south, east and west) and two long-range edges.
We only show a few long-range edges to avoid clutter. Dashed and solid edges indicate the two planar subgraphs spanning the Tanner graph; see the Methods. c, Sketch of a Tanner graph extension for measuring logical operators following ref. 50 by attaching to a surface code. The ancilla corresponding to the measurement can be connected to a surface code, enabling load-store operations for all logical qubits by means of quantum teleportation and some logical unitaries. This extended Tanner graph also has an implementation in a thickness-2 architecture through the A and B edges (Methods).

A BB code with parameters [[n, k, d]] encodes k logical qubits into n data qubits, offering a code distance d, meaning that any logical error spans at least d data qubits. We divide the n data qubits into registers q_L and q_R of size n/2 each. Any check acts on three qubits from q_L and three qubits from q_R. The code relies on n ancillary check qubits to measure the error syndrome. We divide the check qubits into registers q_X and q_Z of size n/2 that collect syndromes of X and Z types, respectively. In total, the encoding relies on 2n physical qubits. The net encoding rate is therefore r = k/(2n). For example, the standard surface code architecture encodes k = 1 logical qubit into n = d² data qubits for a distance-d code and uses d² − 1 check qubits for syndrome measurements. The net encoding rate is r ≈ 1/(2d²), which quickly becomes impractical as one is forced to choose a large code distance, owing to, for instance, the physical errors being close to the threshold value. By contrast, BB codes have encoding rate r ≫ 1/d²; see Table 1 for code examples. To the best of our knowledge, all codes shown in Table 1 are new. The distance-12 code [[144, 12, 12]] may be the most promising for near-term demonstrations, as it combines large distance and a high net encoding rate r = 1/24. For comparison, the distance-11 surface code has a net encoding rate r = 1/241.
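The construction described above is easy to check with a short script. The sketch below builds the two check-matrix blocks of a BB-style code as integer bitmask rows and computes the code length and the number of logical qubits over GF(2). The specific monomials (A = x³ + y + y², B = y³ + x + x² on a 12 × 6 torus) are an assumption chosen to reproduce the [[144, 12, 12]] parameters quoted in the text, and the helper names are ours; the CSS structure H_X = [A|B], H_Z = [Bᵀ|Aᵀ] is the standard two-block form.

```python
# Sketch: build BB-code check matrices as bitmask rows and verify n, the
# weight-6 checks and the logical qubit count k = n - rank(HX) - rank(HZ).
# The monomials below are an ASSUMPTION reproducing the [[144, 12, 12]] code.
l, m = 12, 6                          # torus dimensions; n = 2*l*m data qubits
half = l * m
A_terms = [(3, 0), (0, 1), (0, 2)]    # x^3, y, y^2
B_terms = [(0, 3), (1, 0), (2, 0)]    # y^3, x, x^2

def bit(i, j, offset):
    """Bitmask of the data qubit at torus position (i, j) in one register."""
    return 1 << (offset + (i % l) * m + (j % m))

# An X-check at (i, j) touches three qubits of the left register (shifts from
# A_terms) and three of the right register (shifts from B_terms); Z-checks
# use the transposed (negated) shifts, giving H_X = [A|B], H_Z = [B^T|A^T].
HX = [sum(bit(i + a, j + b, 0) for a, b in A_terms) +
      sum(bit(i + a, j + b, half) for a, b in B_terms)
      for i in range(l) for j in range(m)]
HZ = [sum(bit(i - a, j - b, 0) for a, b in B_terms) +
      sum(bit(i - a, j - b, half) for a, b in A_terms)
      for i in range(l) for j in range(m)]

def gf2_rank(rows):
    """Rank over GF(2) of rows given as integer bitmasks."""
    basis = {}                        # leading-bit position -> basis vector
    for r in rows:
        while r:
            p = r.bit_length() - 1
            if p in basis:
                r ^= basis[p]
            else:
                basis[p] = r
                break
    return len(basis)

n = 2 * half
k = n - gf2_rank(HX) - gf2_rank(HZ)   # logical qubits of a CSS code
print(n, k, f"net rate k/(2n) = {k}/{2 * n}")
```

Because A and B are polynomials in the commuting shifts x and y, AB = BA, which is exactly what makes every X-check commute with every Z-check (each pair overlaps on an even number of qubits); swapping in other monomials lets the same script report the parameters of any other two-block choice.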
Below, we show that the distance-12 BB code outperforms the distance-11 surface code for the experimentally relevant range of error rates.

To prevent the accumulation of errors, one must be able to measure the error syndrome frequently enough. This is accomplished by a syndrome measurement circuit that couples the data qubits in the support of each check operator with the respective ancillary qubit by a sequence of CNOT gates. Check qubits are then measured, revealing the value of the error syndrome. The time it takes to implement the syndrome measurement circuit is proportional to its depth: the number of gate layers composed of non-overlapping CNOTs. As new errors continue to occur while the syndrome measurement circuit is executed, its depth should be minimized. The full cycle of syndrome measurement for a BB code is illustrated in Fig. 2. The syndrome cycle requires only seven layers of CNOTs regardless of the code length. The check qubits are initialized and measured at the beginning and at the end of the syndrome cycle, respectively (see the Methods for details). The circuit respects the cyclic shift symmetry of the underlying code.

Fig. 2: Full cycle of syndrome measurements relying on seven layers of CNOTs. We provide a local view of the circuit that only includes one data qubit from each register q_L and q_R. The circuit is symmetric under horizontal and vertical shifts of the Tanner graph. Each data qubit is coupled by CNOTs with three X-check and three Z-check qubits; see the Methods for more details.

The full error correction protocol performs N_c ≫ 1 syndrome measurement cycles and then calls a decoder: a classical algorithm that takes as input the measured syndromes and outputs a guess of the final error on the data qubits. Error correction succeeds if the guessed and the actual error coincide modulo a product of check operators.
In this case, the two errors have the same action on any encoded (logical) state. Thus, applying the inverse of the guessed error returns the data qubits to the initial logical state. Otherwise, if the guessed and the actual error differ by a non-trivial logical operator, error correction fails, resulting in a logical error. Our numerical experiments are based on the belief propagation with ordered statistics decoder (BP-OSD) proposed by Panteleev and Kalachev36. The original work36 described BP-OSD in the context of a toy noise model with memory errors only. Here we show how to extend BP-OSD to the circuit-based noise model; see the Supplementary Information for details. Our approach closely follows refs. 45,46,47,48.

A noisy version of the syndrome measurement circuit may include several types of faulty operations, such as memory errors on idle data or check qubits, faulty CNOT gates, qubit initializations and measurements. We consider the circuit-based noise model10 in which each operation fails independently with probability p. The probability of a logical error depends on the error rate p, the details of the syndrome measurement circuits and the decoding algorithm. Let P_L(N_c) be the logical error probability after performing N_c syndrome cycles. Define the logical error rate p_L through 1 − P_L(N_c) = (1 − p_L)^N_c. Informally, p_L can be viewed as the logical error probability per syndrome cycle. Following common practice, we choose N_c = d for a distance-d code. Figure 3 shows the logical error rate achieved by codes from Table 1. The logical error rate was computed numerically for p ≥ 10⁻³ and extrapolated to lower error rates using a fitting formula (Methods). The pseudo-threshold p_0 is defined as a solution of the break-even equation p_L(p_0) = kp_0. Here kp is an estimate of the probability that at least one of k unencoded qubits suffers from an error.
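The bookkeeping in this paragraph can be made concrete with a few lines of code. In the sketch below, the inversion formula 1 − P_L = (1 − p_L)^N_c is our reading of "logical error probability per syndrome cycle" (treating cycles as independent), and the break-even helper simply encodes the comparison p_L < kp; both function names are ours.

```python
def per_cycle_rate(P_L, N_c):
    """Per-cycle logical error rate p_L implied by a logical error
    probability P_L accumulated over N_c syndrome cycles, assuming
    independent cycles: 1 - P_L = (1 - p_L)**N_c."""
    return 1.0 - (1.0 - P_L) ** (1.0 / N_c)

def beats_break_even(p_L, p, k):
    """Encoding k qubits helps once p_L is below k*p, an estimate of the
    chance that at least one of k unencoded qubits errs per cycle."""
    return p_L < k * p

# Accumulate a known per-cycle rate over d cycles, then recover it:
d = 12                                   # N_c = d for a distance-d code
P_L = 1.0 - (1.0 - 1e-4) ** d
print(per_cycle_rate(P_L, d))            # recovers 1e-4 (up to rounding)
print(beats_break_even(1e-4, 1e-3, 12))  # True: 1e-4 < 12 * 1e-3
```

The pseudo-threshold p_0 of the text is then the physical error rate at which `beats_break_even` switches from True to False as p grows.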
BB codes offer a pseudo-threshold close to 0.7% (see Table 1), which is nearly the same as the error threshold of the surface code49 and exceeds the threshold of all high-rate LDPC codes known to the authors.

Fig. 3: a, Logical versus physical error rate for small examples of BB LDPC codes. A numerical estimate of p_L (diamonds) was obtained by simulating d syndrome cycles for a distance-d code. Most of the data points have error bars roughly equal to p_L/10 due to sampling errors. b, Comparison between the BB LDPC code [[144, 12, 12]] and surface codes with 12 logical qubits and distance d ∈ {9, 11, 13, 15}. The distance-d surface code with 12 logical qubits has length n = 12d² because each logical qubit is encoded into a separate d × d patch of the surface code lattice.

For example, suppose that the physical error rate is p = 10⁻³, which is a realistic goal for near-term demonstrations. Encoding 12 logical qubits using the distance-12 code from Table 1