For years, the biggest hurdle in quantum computing has been scale. While quantum processors can already tackle complex simulations in chemistry, materials science, and data security, most remain too small and fragile to be practical for large-scale applications.
A new study led by the University of California, Riverside, suggests that may be changing.
Researchers demonstrated through simulations that multiple small quantum chips can be linked together into one functioning system even if the connections between them aren’t flawless.
The finding points to a path for building larger, fault-tolerant quantum computers sooner than expected.
“Our work isn’t about inventing a new chip,” said Mohamed A. Shalby, the paper’s first author and a doctoral candidate in UCR’s Department of Physics and Astronomy.
“It’s about showing that the chips we already have can be connected to create something much larger and still work. That’s a foundational shift in how we build quantum systems.”
Scaling, in this context, means handling ever-larger amounts of data without breaking down. Fault tolerance refers to a system’s ability to detect and correct errors automatically. Together, they form the backbone of reliable quantum computing.
Chips linked, errors corrected
In practice, connecting quantum chips has been difficult because links between separate processors tend to be noisy, especially when housed in different cryogenic refrigerators.
“Connections between separate chips — especially those housed in separate cryogenic refrigerators — are much noisier than operations within a single chip,” Shalby explained. “This increased noise can overwhelm the system and prevent error correction from working properly.”
The UC Riverside-led team found, however, that even when inter-chip links were up to 10 times noisier than the chips themselves, the system could still detect and correct errors.
“This means we don’t have to wait for perfect hardware to scale quantum computers,” Shalby said. “We now know that as long as each chip is operating with high fidelity, the links between them can be ‘good enough’ — not perfect — and we can still build a fault-tolerant system.”
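The "good enough, not perfect" intuition can be illustrated with a toy model. The sketch below uses a one-dimensional repetition code with made-up error rates — far simpler than the surface codes the team actually simulated — where the single physical bit carried over the chip-to-chip link flips ten times more often than the rest. Even so, a majority-vote decoder keeps the logical error rate well below the physical one:

```python
import random

def logical_error_rate(d=7, p_chip=0.01, link_factor=10,
                       trials=100_000, seed=0):
    """Monte Carlo estimate of the logical bit-flip rate for a
    length-d repetition code whose middle bit crosses a noisy
    inter-chip link.

    Toy model only: the study concerns surface codes, and
    `link_factor` here simply scales one bit's flip probability.
    """
    rng = random.Random(seed)
    p_link = link_factor * p_chip
    seam = d // 2  # the one physical bit carried over the link
    failures = 0
    for _ in range(trials):
        flips = sum(
            rng.random() < (p_link if i == seam else p_chip)
            for i in range(d)
        )
        if flips > d // 2:  # majority vote decodes to the wrong value
            failures += 1
    return failures / trials

rate = logical_error_rate()
print(rate)  # orders of magnitude below the 1% per-bit physical rate
```

The point of the sketch is the threshold behavior: as long as each physical error rate is small, redundancy wins, and one noisier seam does not change that.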
Building reliable quantum systems
The research highlights why simply counting qubits isn’t enough.
Individual “logical” qubits (the usable building blocks of quantum programs) are created by combining hundreds or even thousands of physical qubits. This redundancy allows the system to correct errors that naturally creep in.
One of the most effective techniques is the surface code, which spreads each logical qubit across a two-dimensional grid of physical qubits and repeatedly measures parity checks so the processor can detect and fix mistakes within its own architecture. Shalby’s team simulated thousands of modular designs using this method, testing them across multiple levels of error and noise.
The results suggest scalable, reliable quantum systems could be built using today’s imperfect hardware.
“Until now, most quantum milestones focused on increasing the sheer number of qubits,” Shalby said. “But without fault tolerance, those qubits aren’t useful. Our work shows we can build systems that are both scalable and reliable — now, not years from now.”
The research drew inspiration from earlier work at MIT and used tools from Google Quantum AI. It was supported by the National Science Foundation and conducted with collaborators in Germany.