Progress in algorithms makes small, noisy quantum computers viable

Hybrid algorithms can accommodate limited qubits and a lack of error correction for real-world tasks

August 12, 2021

Despite the limitations of today’s hardware, variational quantum algorithms can solve many problems of interest for quantum computers, including solving linear equations, simulating quantum systems, and developing algorithms for sensors.

LOS ALAMOS, N.M., Aug. 12, 2021—As reported in a new article in Nature Reviews Physics, instead of waiting for fully mature quantum computers to emerge, Los Alamos National Laboratory and other leading institutions have developed hybrid classical/quantum algorithms to extract the most performance—and potentially quantum advantage—from today’s noisy, error-prone hardware. Known as variational quantum algorithms, they use quantum devices to manipulate quantum systems while shifting much of the workload to classical computers to let them do what they currently do best: solve optimization problems.

“Quantum computers have the promise to outperform classical computers for certain tasks, but on currently available quantum hardware they can’t run long algorithms. They have too much noise as they interact with the environment, which corrupts the information being processed,” said Marco Cerezo, a physicist specializing in quantum computing, quantum machine learning, and quantum information at Los Alamos and a lead author of the paper. “With variational quantum algorithms, we get the best of both worlds. We can harness the power of quantum computers for tasks that classical computers can’t do easily, then use classical computers to complement the computational power of quantum devices.”

Current noisy, intermediate-scale quantum computers have between 50 and 100 qubits, lose their “quantumness” quickly, and lack error correction, which requires more qubits. Since the late 1990s, however, theoreticians have been developing algorithms designed to run on an idealized large, error-corrected, fault-tolerant quantum computer.

“We can’t implement these algorithms yet because they give nonsense results or they require too many qubits. So people realized we needed an approach that adapts to the constraints of the hardware we have—an optimization problem,” said Patrick Coles, a theoretical physicist developing algorithms at Los Alamos and the senior lead author of the paper.

“We found we could turn all the problems of interest into optimization problems, potentially with quantum advantage, meaning the quantum computer beats a classical computer at the task,” Coles said. Those problems include simulations for material science and quantum chemistry, factoring numbers, big-data analysis, and virtually every application that has been proposed for quantum computers.

The algorithms are called variational because the optimization process varies the algorithm on the fly, as a kind of machine learning. It changes parameters and logic gates to minimize a cost function, which is a mathematical expression that measures how well the algorithm has performed the task. The problem is solved when the cost function reaches its lowest possible value.

In an iterative feedback loop at the heart of a variational quantum algorithm, the quantum computer estimates the cost function and passes that result back to the classical computer. The classical computer then adjusts the input parameters and sends them back to the quantum computer, which runs the optimization again.
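To make that loop concrete, here is a minimal sketch in Python that simulates the whole process classically. The one-qubit circuit (a single RY rotation), the cost function (the expectation value of Pauli-Z), and the gradient-descent optimizer are illustrative assumptions, not the specific algorithms described in the paper; on real hardware, the cost-estimation step would run on the quantum device while the parameter update stays on the classical computer.

```python
# Minimal sketch of a hybrid variational loop (illustrative only; not code from
# the paper). A one-qubit "quantum device" is simulated classically with NumPy:
# the parameterized circuit is a single RY rotation, the cost is the expectation
# value of the Pauli-Z observable, and the classical optimizer is plain gradient
# descent using the parameter-shift rule.
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)  # observable defining the cost


def ry(theta):
    """Single-qubit RY(theta) rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)


def cost(theta):
    """'Quantum' step: prepare |psi(theta)> = RY(theta)|0> and estimate <Z>."""
    psi = ry(theta) @ np.array([1, 0], dtype=complex)
    return float(np.real(psi.conj() @ Z @ psi))


# Classical step: the optimizer proposes new parameters, the (simulated)
# quantum device returns the cost, and the loop repeats until convergence.
theta, lr = 0.1, 0.4
for step in range(50):
    # Parameter-shift rule gives an exact gradient for this rotation gate.
    grad = 0.5 * (cost(theta + np.pi / 2) - cost(theta - np.pi / 2))
    theta -= lr * grad

print(f"theta ~ {theta:.3f}, cost ~ {cost(theta):.3f}")  # converges toward pi, -1
```

In this toy problem the cost is simply cos(theta), so the loop drives theta toward pi, where the cost reaches its minimum of -1; in a realistic variational algorithm, the circuit has many parameters and the cost encodes the actual task, such as a molecule’s ground-state energy.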

The review article is meant to be a comprehensive introduction and pedagogical reference for researchers entering this nascent field. In it, the authors discuss the applications for these algorithms and how they work, as well as the challenges and pitfalls and how to address them. Finally, the article looks to the future, considering the best opportunities for achieving quantum advantage on the computers that will become available in the next couple of years.

Paper: “Variational Quantum Algorithms,” by M. Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C. Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R. McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, and Patrick J. Coles, in Nature Reviews Physics. DOI: 10.1038/s42254-021-00348-9

Funding: U.S. Department of Energy (DOE) Office of Science, Advanced Scientific Computing Research program; DOE Quantum Science Center (QSC); Laboratory Directed Research and Development program, Los Alamos National Laboratory.

About Los Alamos National Laboratory

Los Alamos National Laboratory, a multidisciplinary research institution engaged in strategic science on behalf of national security, is managed by Triad, a public service-oriented, national security science organization equally owned by its three founding members: Battelle Memorial Institute (Battelle), the Texas A&M University System (TAMUS), and the Regents of the University of California (UC), for the Department of Energy’s National Nuclear Security Administration.

Los Alamos enhances national security by ensuring the safety and reliability of the U.S. nuclear stockpile, developing technologies to reduce threats from weapons of mass destruction, and solving problems related to energy, environment, infrastructure, health, and global security concerns.

LA-UR-21-28052