Quantum Computing Algorithm Breakthrough Brings Practical Use Closer to Reality
Reducing quantum computers' complexity via software updates.
Out of all the common refrains in the world of computing, "if only software would catch up with hardware" would probably rank pretty high. And yet, software does sometimes catch up with hardware; in this case, it may even go as far as unlocking quantum computations for classical computers. That's according to researchers at the RIKEN Center for Quantum Computing in Japan, who have published work on an algorithm that significantly accelerates a specific quantum computing workload. More significantly, the workload itself, compiling time-evolution operators, has applications in condensed matter physics and quantum chemistry, two fields that can unlock new worlds within our own.
Normally, an improved algorithm wouldn't be out of the ordinary; updates are everywhere, after all. Every app, software, or firmware update essentially brings revised code that either solves problems or improves performance (hopefully). And improved algorithms are nice, as anyone with a graphics card from either AMD or NVIDIA can attest. But let's face it: we're used to being disappointed with performance updates.
And yet in this case, the performance gains are extraordinary. Through the improved algorithm (itself a hybrid of quantum and classical methods), future quantum computers can be made simpler than we thought possible: they'll be able to tackle bigger problems sooner than we expected, and at a lower cost. And the gains don't stop there: the algorithm could make it possible for conventional machines to process degrees of complexity that supposedly only a quantum computer could handle.
“Time-evolution operators are huge grids of numbers that describe the complex behaviors of quantum materials,” explained Kaoru Mizuta of the RIKEN Center for Quantum Computing. “They’re of great importance because they give quantum computers a very practical application - better understanding quantum chemistry and the physics of solids.”
The algorithmic improvement does away with Trotterization, the technique quantum computers have deployed until now, and one already suspected to be unsustainable for long-term scaling. That's because the technique requires enormous numbers of quantum gates, each operating on a number of qubits programmed to perform a given function. Even IBM's 1,121-qubit Condor QPU (Quantum Processing Unit), which is due to be released this year, would be hard-pressed to enable as many quantum gates as Trotterization is expected to require for workloads that actually mean something in quantum computing terms.
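To see why Trotterization gets expensive, here is a minimal classical sketch (a toy illustration, not the paper's method). The time-evolution operator exp(-i(A+B)t) of a Hamiltonian with non-commuting parts A and B is approximated by n alternating small steps; the error shrinks only linearly with n, so every extra digit of accuracy costs proportionally more steps, and on real hardware each step is a layer of quantum gates:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Toy 2-qubit Hamiltonian H = A + B with non-commuting parts
A = np.kron(Z, Z)                      # interaction term
B = np.kron(X, I2) + np.kron(I2, X)    # transverse-field term

t = 1.0
exact = expm(-1j * (A + B) * t)        # exact time-evolution operator

def trotter(n):
    """First-order Trotter approximation with n steps."""
    step = expm(-1j * A * t / n) @ expm(-1j * B * t / n)
    return np.linalg.matrix_power(step, n)

for n in (1, 10, 100):
    err = np.linalg.norm(trotter(n) - exact, 2)
    print(f"n = {n:3d}  error = {err:.2e}")
```

On a quantum computer, each of the n steps becomes a layer of gates, which is exactly where the "enormous numbers of quantum gates" come from.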
No, quantum computing won't be happening in our smartphones. In a way, today's superconducting refrigerators could be compared to the ENIAC from before the dawn of integrated circuits, and the gap between that machine and today's fastest CPUs and best GPUs is the road ahead of us for quantum - one where the starting shot still rings.
Francisco Pires is a freelance news writer for Tom's Hardware with a soft side for quantum computing.
Integr8d said:
“We're used to being disappointed with performance updates.”
Because everything is already so optimized... Quantum is fertile ground.
Metal Messiah. said:
I think researchers from RIKEN also provided a prototype of quantum error correction in silicon for the first time last year (2022), which I think was made possible by implementing a three-qubit Toffoli-type quantum gate. They demonstrated full control of a three-qubit system. It was not an entirely new concept, though.
Not sure how the progress on this is going now. For two qubits it was okay, but for error correction we actually need a three-qubit system.
Anyway, this new hybrid algorithm of quantum and classical methods sounds promising, and there seems to be a lower computational cost to compile time-evolution operators as well.
There are still some challenges to overcome, though: for example, improving qubit stability, increasing computational scalability, and enhancing error-correction methods to realize the full potential of quantum computing in atomic-level simulations and calculations.
Curious to know for this algorithm whether they are using Python, Qiskit, Cirq, and/or Microsoft's Q# language, or maybe some other?
DougMcC said:
Metal Messiah. said:
"Curious to know for this algorithm whether they are using Python, Qiskit, Cirq, and/or Microsoft's Q# language, or maybe some other?"
Does it matter? Anything performance-relevant is eventually going to be in a C library from which various other languages will leverage it.
husker said:
DougMcC said:
"Does it matter? Anything performance-relevant is eventually going to be in a C library from which various other languages will leverage it."
Yes, a person's curiosity matters.
Metal Messiah. said:
You have a point, but I'm still learning, so yes, I was just curious. It might help me in future projects and thesis work as well.
By the way, in case anyone's interested, here is the source link for the recently published paper, with more details:
https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuantum.3.040302
The implementation of time-evolution operators on quantum circuits is important for quantum simulation. However, the standard method, Trotterization, requires a huge number of gates to achieve desirable accuracy. Here, we propose a local variational quantum compilation (LVQC) algorithm, which allows us to accurately and efficiently compile time-evolution operators on a large-scale quantum system by optimization with smaller-size quantum systems.
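The "compile by optimization" idea in that abstract can be sketched in miniature (a toy illustration of variational compilation in general, not the paper's LVQC procedure): classically optimize the angles of a small parameterized circuit until it matches a target time-evolution operator. Here, a single-qubit U(t) = exp(-i H t) is compiled into an Rz-Rx-Rz sequence; the Hamiltonian and ansatz are made up for the example:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Toy single-qubit Hamiltonian and its exact time-evolution operator
H = 0.7 * X + 0.3 * Z
target = expm(-1j * H * 1.0)

def rz(a): return expm(-1j * a / 2 * Z)   # Z-rotation gate
def rx(b): return expm(-1j * b / 2 * X)   # X-rotation gate

def infidelity(params):
    """Distance between the ansatz circuit and the target, ignoring global phase."""
    a, b, c = params
    U = rz(a) @ rx(b) @ rz(c)
    return 1 - abs(np.trace(target.conj().T @ U)) / 2

# Classical optimizer finds gate angles that reproduce the time evolution
res = minimize(infidelity, x0=[0.1, 0.2, 0.3], method="Nelder-Mead")
print(f"final infidelity: {res.fun:.2e}")
```

The fixed three-gate circuit replaces however many Trotter steps the target would otherwise need; the LVQC result in the paper is about making this kind of compilation tractable for large systems by optimizing on smaller ones.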