The hidden engineering challenge in building quantum computers
Four decades ago, quantum computing was pure intellectual curiosity, a thought experiment about whether the laws of quantum mechanics could be harnessed for computation. Today, it's no longer a question of whether. Quantum computers exist. They run. They demonstrably execute quantum algorithms on hundreds of physical qubits. The hard part is done, right?
Wrong. The hard part is just starting. We've built quantum computers. Now comes the actual engineering challenge: making them useful. The gap between "hundreds of qubits executing toy demonstrations" and "millions of qubits solving real problems" isn't another physics problem. It's a systems engineering problem, and the playbook for solving it already exists. We just need to apply it.
This is what makes the scaling challenge so interesting. The physics of quantum computation is well understood. The problem isn't hidden in exotic phenomena or undiscovered laws. It's sitting right in front of us, visible and measurable: quantum information is fragile, errors accumulate constantly, and fixing those errors is expensive. Scaling requires solving that economic problem through better hardware, smarter architecture, and ruthless attention to detail.
From laboratory curiosity to engineering problem
Imagine trying to compute with soap bubbles that pop if you look at them wrong. That's roughly how quantum bits work. They exist in superposition, holding multiple states simultaneously until measured. That superposition is fragile. Interact with the environment, and it collapses. Wait too long, and it drifts. Apply an operation, and there's a chance it fails. The result is a constant, invisible erosion of the quantum information you're trying to preserve.
This isn't a new problem in the history of computing. It's a familiar one with a familiar solution. Classical computers have been correcting errors for decades. If you send a message and a bit flips, classical systems can detect and fix it. They do this through redundancy: send the message multiple times, compare the results, take a vote. The approach works because classical bits can be copied freely.
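To make that concrete, here is a minimal sketch of the classical approach, a three-way repetition code with majority voting (the function names are just for illustration):

```python
from collections import Counter

def send_with_redundancy(bit: int, copies: int = 3) -> list[int]:
    """Send the same bit several times over a noisy channel."""
    return [bit] * copies  # in a real channel, any copy may flip in transit

def majority_vote(received: list[int]) -> int:
    """Recover the most likely original bit by comparing copies and voting."""
    return Counter(received).most_common(1)[0][0]

# One copy flipped in transit; the vote still recovers the original bit.
assert majority_vote([1, 0, 1]) == 1
```

The whole trick rests on the first line of send_with_redundancy: copying the bit. That is precisely the step quantum mechanics forbids.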
Quantum bits cannot be copied. The no-cloning theorem forbids it, right there in the foundations of quantum mechanics. So the most direct classical playbook, copy, compare, and vote, is unavailable. Quantum systems need something more clever and substantially more expensive.
The surface code and its brutal cost
Enter the surface code, a remarkable discovery that transformed quantum computing from "elegant in theory" to "possibly viable in practice." The surface code works by arranging many physical qubits in a two-dimensional grid and entangling them in a specific pattern. Errors then reveal themselves through diagnostic signals, called syndromes, that can be measured without collapsing the superposition. You detect which errors happened without destroying the information you're protecting. It's an elegant bit of physics.
The cost is equally remarkable: to build one reliable logical qubit using surface codes, you might need 1,000 physical qubits. The actual number depends on how good your physical qubits are and how clever your error correction code is, but that factor of 1,000 is not a pessimistic estimate. It's roughly where the physics and mathematics point us given current hardware specifications.
This ratio is the real story of quantum computing over the next decade. Everything that follows flows from this brutal fact. If you want a quantum computer with 1 million logical qubits (enough for genuinely useful applications), you need roughly 1 billion physical qubits. If your fabrication process can make qubits with error rates of 10^-3 per operation, you're stuck building billion-qubit systems. But if you can improve the error rate to 10^-4, the required physical qubits drop by an order of magnitude or more. The exponent matters enormously.
The surface code works, but only below an error threshold, commonly quoted around 10^-2 for realistic circuit-level noise. Below that threshold, adding more qubits reliably improves performance; above it, adding qubits makes things worse. Many of today's machines still sit uncomfortably close to that line once every error source is counted, and the overheads only become manageable well below it. Getting comfortably below threshold is the central scaling challenge.
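To see why these numbers behave the way they do, here is a back-of-envelope sketch using the widely cited surface-code scaling heuristic p_logical ≈ A × (p_physical / p_threshold)^((d+1)/2), where d is the code distance. The prefactor, threshold, target logical error rate, and the roughly 2d^2 footprint are all assumed round numbers, not the paper's detailed models:

```python
def physical_qubits_per_logical(p_phys: float,
                                target_logical_error: float = 1e-12,
                                p_threshold: float = 1e-2,
                                prefactor: float = 0.1) -> int:
    """Estimate the surface-code footprint of one logical qubit from the
    scaling heuristic p_logical ~ prefactor * (p_phys/p_threshold)**((d+1)/2).
    All constants are illustrative round numbers, not measured values."""
    if p_phys >= p_threshold:
        raise ValueError("At or above threshold, more qubits make things worse.")
    # Find the smallest odd code distance d that reaches the target error rate.
    d = 3
    while prefactor * (p_phys / p_threshold) ** ((d + 1) / 2) > target_logical_error:
        d += 2
    # A distance-d surface code patch uses roughly 2 * d**2 physical qubits
    # (data qubits plus the ancilla qubits used for error detection).
    return 2 * d * d

for p in (5e-3, 1e-3, 1e-4, 1e-5):
    n = physical_qubits_per_logical(p)
    print(f"p_phys = {p:.0e}: ~{n:,} physical qubits per logical qubit")
```

With these toy constants, a physical error rate of 10^-3 lands near the factor-of-1,000 footprint quoted above, and each further improvement shrinks the required code distance. That nonlinearity is exactly what the hardware targets discussed below are chasing.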
Three levers for engineering the path forward
Here's where the research shifts from problem to actionable solution. Scaling quantum computers doesn't require waiting for revolutionary physics. It requires pulling three specific engineering levers that are well understood in isolation but powerful in combination.
The first lever is hardware quality through existing technology. Superconducting qubits can be made substantially better by borrowing fabrication techniques from the semiconductor industry. Precise material preparation, better substrate control, tighter tolerances, and careful attention to parasitic coupling all reduce error rates. This isn't inventing new qubit types. It's manufacturing the ones we have far more carefully. The paper argues, through detailed analysis, that moving from laboratory prototypes to industrial semiconductor fabrication standards could improve error rates by an order of magnitude. That single improvement cascades through all downstream calculations.
The second lever is systems architecture. Stop thinking about building one monolithic quantum computer. Instead, build heterogeneous systems where quantum cores work alongside classical processors. The quantum system handles the operations where quantum parallelism matters. The classical system handles syndrome extraction (figuring out which errors happened), feedback control (adjusting future operations based on those errors), problem setup, and orchestration. This division of labor reduces the total size of the quantum system you need. It also lets you build specialized quantum accelerators for specific problem classes rather than trying to solve everything with one general-purpose design.
The third lever is realistic error modeling and optimization. Current designs often assume error rates are uniform everywhere: every qubit has the same error rate, every operation fails at the same rate. In reality, qubits vary. Two-qubit gates fail more often than single-qubit gates. Some physical operations are inherently noisier than others. Instead of designing for the worst case everywhere, the paper shows that detailed accounting of real, heterogeneous error distributions lets you optimize around them. This reduces resource requirements by orders of magnitude without changing the hardware. It's pure engineering discipline.
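As a toy illustration of that payoff, reusing the physical_qubits_per_logical helper from the sketch above: suppose the regions of a chip hosting each logical qubit have measured error rates that vary, most good and a few noisy. The rates here are invented purely to show the direction of the effect:

```python
# Hypothetical per-region error rates: most regions are good, a few are noisy.
measured_rates = [2e-4] * 95 + [9e-4] * 5

# Uniform worst-case design: size every logical qubit patch for the noisiest region.
worst_case_budget = len(measured_rates) * physical_qubits_per_logical(max(measured_rates))

# Tailored design: give each patch only the code distance its own region needs.
tailored_budget = sum(physical_qubits_per_logical(p) for p in measured_rates)

print(f"Worst-case design: {worst_case_budget:,} physical qubits")
print(f"Tailored design:   {tailored_budget:,} physical qubits")
```

Even this crude model cuts the budget by more than half; the paper's full treatment, which also distinguishes gate types and error channels, is where the orders-of-magnitude savings come from.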
What the hardware actually needs to be
After understanding the problem and the three levers, the natural question becomes concrete: what are the actual targets? What error rate do we need? How many qubits? The paper provides something rare and valuable here: detailed resource estimates based on realistic hardware models and actual applications.
Current superconducting qubits have error rates around 10^-3, close enough to threshold that error correction overheads remain enormous. Target-generation hardware could reach 10^-4 through better fabrication. Desired-generation hardware might hit 10^-4.5 to 10^-5. Each step matters enormously. The relationship between physical qubit error rate and logical qubit overhead is nonlinear: the further below threshold the hardware gets, the faster the required code distance, and with it the physical qubit count, falls.
For a modestly useful quantum chemistry calculation, the resource estimates are striking. With current hardware, you might need millions of physical qubits. With target-generation improvements, hundreds of thousands. With desired specifications, tens of thousands. These aren't marginal gains. They're orders-of-magnitude differences driven by incremental engineering improvements.
The paper also includes a sensitivity analysis: how does computation time scale with qubit quality? With current hardware, some useful calculations might take days or weeks. With target improvements, hours. With desired specifications, minutes. This frames hardware improvements not as abstract research goals but as concrete speedups on real problems.
Actual applications that justify the engineering
Theory matters, but application matters more. What problems are actually worth engineering a million-qubit system to solve?
The paper provides detailed resource analysis for several applications. Quantum chemistry calculations sit at the top of the list, unsurprising given the explosive growth of interest in using quantum computers for drug discovery and materials science. Simulating molecular behavior at the quantum level to understand reaction rates, binding energies, and transition states is exactly what quantum computers should excel at. Classical simulation of quantum systems becomes exponentially expensive as the system size grows. Quantum systems can simulate themselves directly, at least in principle.
The second major application is catalyst design. Industrial processes like fertilizer production run through catalytic reactions that are poorly understood and expensive to optimize. Being able to screen catalyst candidates computationally rather than building physical prototypes for each one could dramatically accelerate materials discovery.
NMR spectroscopy simulation is a third target. Nuclear magnetic resonance is a crucial tool in chemistry and materials science, but interpreting NMR data and predicting spectroscopic properties requires simulating the quantum behavior of interacting nuclear spins, something classical methods handle poorly as systems grow. Quantum computers could dramatically improve this.
The fourth application is Fermi-Hubbard simulation, a fundamental model system in condensed matter physics that quantum computers can simulate efficiently while classical computers struggle.
For each application, the paper reports how many logical qubits you need, how long the computation takes, and how many total physical qubits that translates to under different hardware assumptions. These are detailed circuit-level breakdowns, not back-of-envelope estimates. They account for realistic gate depths, error correction overhead, and the actual structure of the algorithms. The result is a clear picture of what's within reach.
The punchline is both exciting and grounded: these applications are genuinely useful, the resource requirements are large but not impossible, and they're achievable if engineering discipline is applied. With target-generation hardware, quantum computers could deliver real utility to pharmaceutical and materials companies in the 2030s or 2040s. That's not tomorrow, but it's also not science fiction.
The hybrid architecture that actually works
The future of quantum computing probably isn't replacing all classical computers with quantum ones. It's building classical and quantum processors designed to work together from the start, not bolted together afterwards.
Think of it like GPU computing today. Graphics processors are phenomenally good at specific parallel tasks, but the CPU still orchestrates the computation. Quantum processors will work similarly. They're phenomenally good at quantum simulation and at optimization problems where quantum effects can offer a genuine advantage. But the classical system handles everything else.
Practically, this means classical processors manage syndrome extraction, the relentless process of diagnosing which errors occurred in the quantum system so they can be corrected. They handle feedback control, adjusting future quantum operations based on error information. They manage problem setup and result extraction, pulling useful answers out of quantum superpositions. And they orchestrate multiple quantum cores if you're building distributed systems.
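In code terms, that division of labor is roughly the following control loop. Every object and method name here is a hypothetical stand-in for illustration, not a real API:

```python
def run_error_corrected_program(quantum_core, decoder, program):
    """Sketch of classical orchestration around a quantum core: the quantum
    side runs gates and emits error syndromes; the classical side decodes
    them and feeds corrections back into the next step."""
    quantum_core.prepare(program.initial_state)   # problem setup
    pending_corrections = []
    for layer in program.gate_layers:
        # Quantum side: apply one layer of logical operations,
        # folding in the corrections scheduled by the previous round.
        quantum_core.apply(layer, corrections=pending_corrections)
        # Quantum side: measure ancilla qubits to extract the syndrome
        # without collapsing the encoded data.
        syndrome = quantum_core.measure_syndrome()
        # Classical side: decode which errors most likely occurred
        # and schedule fixes for the next layer (feedback control).
        pending_corrections = decoder.decode(syndrome)
    # Classical side: pull the useful answer out of the final state.
    return quantum_core.measure_logical_output()
```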
The research point here is that custom-designed accelerators for specific problem classes yield better results than trying to build one general-purpose quantum system. An accelerator designed specifically for quantum chemistry simulations can be more efficient than generic hardware trying to handle chemistry, optimization, machine learning, and whatever else. This specialized approach mirrors how classical high-performance computing evolved: general-purpose processors paired with domain-specific accelerators, the whole system orchestrated together.
The engineering path forward
The landscape that emerges from this research is surprisingly clear-eyed. Scaling quantum computers to useful sizes is not a physics problem. The physics is understood. It's an engineering problem of the familiar kind: managing constraints, making tradeoffs, and executing with discipline.
The promise of quantum computing rests on solid ground. The algorithms work. The error correction theory works. Prototypes demonstrate the basic principles. What remains is translating that foundation into machines with millions of qubits and controllable, suppressed errors. That requires better hardware made with semiconductor-industry discipline. It requires architectural choices that pair quantum and classical processing strategically. It requires detailed resource analysis that accounts for real hardware imperfections rather than idealized assumptions.
The paper's central insight is straightforward but powerful: we know what needs to happen. We don't need to wait for revolutionary discoveries. We need to apply engineering rigor, steal techniques from adjacent fields, and measure everything carefully. The timeline is uncertain, but the direction is clear. That's how hard engineering problems get solved.
This is a Plain English Papers summary of a research paper called How to Build a Quantum Supercomputer: Scaling from Hundreds to Millions of Qubits. If you like this kind of analysis, join AIModels.fyi or follow us on Twitter.
