Image by author + ChatGPT

As hardware, yes; as useful tools, not so much.

Technology usually exists, or it doesn't. Quantum computers occupy a third state. Suspended from ceilings in glass-walled rooms and cooled to within a hair of absolute zero, they run circuits and return results. But a constant fizz of random errors means that for any substantial task, those results dissolve into noise. It's the field's biggest problem. The machines exist, the computers do not.

Why not? Because quantum bits, or qubits, are skittish: they flip, fade, and pick up noise from stray heat or vibration. Researchers can keep a handful in line and use statistics to coax results, but stretch to hundreds, and the slip-ups multiply beyond control. Every extra gate is another roll of the dice, every measurement another chance for noise to creep in. Deep circuits, the kind needed for drug discovery or global supply-chain optimization, unravel long before they finish.

Two fixes often get muddled. One is error mitigation, a statistical technique that tidies outputs from today's noisy chips: good for demos, but not a long-term solution. The other is error correction, a much stronger approach that uses many physical qubits to form a single, longer-lived logical qubit, constantly checked for mistakes and nudged back into place. Done well, the error rate falls as the code gets bigger, and the system can begin to scale.

But that's expensive. A single logical qubit needs an army of physical ones: hundreds or even thousands. The most commonly used approach, the surface code, looks simple: tile qubits like a chessboard and use neighbors to spot mistakes. The grid's size determines its error-correcting power, or distance. A small distance-3 grid, for example, can fix a single error, and a more robust distance-5 grid can correct two. That's the trade-off: boosting the distance suppresses errors, but multiplies the hardware and the headaches of keeping the contraption cold, quiet, and synchronized.

A recent paper in Nature from Google shows that trade-off beginning to pay off. On a superconducting chip, researchers scaled a surface-code memory from distance five to distance seven. The error rate fell to about 0.14 per cent per cycle, and the 101-qubit device turned out more than twice as reliable as its parts.

Rival platforms are making different bets to the same end. Trapped-ion machines favor fidelity over speed (nearly flawless qubits, but slower clocks) and have started to stack fully fault-tolerant building blocks, including the awkward non-Clifford gates needed for universal computation. IBM's superconducting rig backs the opposite trade-off: scale and tempo, trying to wring value now from large error-mitigated simulations ("useful quantum computing," in the firm's own words) while error correction catches up. Others, like neutral-atom arrays, chase scale through modularity. The final destination is the same: long, deep circuits made reliable by error correction.

Quantum isn't working yet, but the world isn't waiting. America's standards body, NIST, has finalized the first post-quantum cryptography standards, and governments and firms have started on years-long migrations that will touch everything from browsers to bank hardware. Britain's cyber-security agency has told critical-infrastructure operators to be ready to finish the switch by the mid-2030s. But all roads lead back to error correction.
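Why distance matters can be put in a line of arithmetic. Here is a minimal sketch, assuming the textbook scaling for surface-code memories, in which the logical error rate goes roughly as A(p/p_th)^((d+1)/2); the threshold p_th, prefactor A, and physical error rate below are illustrative assumptions, not Google's measured figures.

```python
# Toy model of surface-code error suppression (illustrative numbers only).
# Textbook approximation: p_L ~ A * (p / p_th) ** ((d + 1) / 2), where p is
# the physical error rate per operation, p_th the code's threshold (assumed
# ~1% here), d the code distance, and A a fitting constant (assumed 0.1).

def correctable_errors(d: int) -> int:
    """A distance-d code corrects up to floor((d - 1) / 2) arbitrary errors."""
    return (d - 1) // 2

def logical_error_rate(p: float, d: int, p_th: float = 0.01, a: float = 0.1) -> float:
    """Rough per-cycle logical error rate of a distance-d surface-code memory."""
    return a * (p / p_th) ** ((d + 1) / 2)

for d in (3, 5, 7):
    print(f"distance {d}: corrects {correctable_errors(d)} error(s), "
          f"logical error per cycle ~ {logical_error_rate(p=0.001, d=d):.0e}")
```

In this toy model, so long as physical errors sit below threshold, each two-step increase in distance cuts the logical error by a constant factor; above threshold, bigger codes only make things worse. Demonstrating that kind of below-threshold suppression on real hardware is what the distance-five-to-seven result amounts to.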
Three things will count as progress from here: proof that logical errors shrink with distance, a practical recipe that scales to dozens of logical qubits, and one fully corrected computation beyond classical computers' reach. For now, the hardware is here, the computer isn't. That's no cause for cynicism. Most technologies worked in miniature, and badly, long before they scaled: engines slipped, transistors leaked, rockets wobbled. Tame the errors, and the computer will come into being.