Google claimed to achieve the coveted quantum supremacy milestone with its Sycamore system back in 2019. But the problem it used to make the claim has just been solved by today's accelerator of choice - the GPU. As reported by Science, researchers in China have recently managed to solve the same computational problem that led Google to claim the title, despite being equipped with "just" 512 GPUs, which supercharged some smart changes to the original algorithm. Quantum supremacy refers to the moment when a quantum computer solves a problem that would be practically impossible for a classical computer.
At the time, Google said it would take the then-fastest supercomputer - the IBM-provided Summit - an unholy 10,000 years to solve the same computation that its Sycamore quantum computer crunched in 200 seconds. The Chinese team's 512 GPUs took fifteen hours to do the same.
It's just another reminder that both time and quantum computing are relative - which is understandable, considering the relative state of infancy of the technology.
Google's claim to the quantum supremacy title rested on discovering a pattern of interference in the qubits' values. Quantum computing is a fickle master: all current approaches are prone to decoherence, whereby the environment and the qubits' design and operation introduce errors into their calculations.
By running the same algorithm through Sycamore for 200 seconds (and millions of iterations), Google extrapolated a result from these operational errors, showing the pattern of the processor's deviations from the exact, correct values it should be outputting. These deviations occurred because the errors made certain outputs more likely than others; the pattern was ultimately visualized as a spiky graph that could be reliably reproduced.
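To see why repeated sampling produces a reproducible spiky pattern, consider a toy sketch (my illustration, not Google's actual benchmark code, and at a far smaller scale than Sycamore's 53 qubits): random quantum circuits bias some output bitstrings over others, and sampling millions of runs recovers that bias reliably.

```python
# Toy illustration (assumed, not Google's benchmark): repeatedly sampling
# a random circuit's biased output distribution reveals a reproducible
# "spiky" pattern of over- and under-represented bitstrings.
import numpy as np

rng = np.random.default_rng(0)

n_qubits = 10                   # toy size; Sycamore used 53 qubits
n_outcomes = 2 ** n_qubits

# Random-circuit bitstring probabilities roughly follow an exponential
# (Porter-Thomas) distribution - some outcomes are far likelier than others.
ideal_probs = rng.exponential(size=n_outcomes)
ideal_probs /= ideal_probs.sum()

# "Run the circuit" a million times by sampling bitstrings.
samples = rng.choice(n_outcomes, size=1_000_000, p=ideal_probs)
observed = np.bincount(samples, minlength=n_outcomes) / samples.size

# The observed frequencies closely track the ideal spiky pattern.
correlation = np.corrcoef(ideal_probs, observed)[0, 1]
print(f"correlation with ideal spiky pattern: {correlation:.3f}")
```

The spikes are a fingerprint of the circuit itself, which is why the same pattern shows up run after run - and why a classical simulation that reproduces the fingerprint can contest the claim.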
This graphical representation of the relationship between errors and outputs is what Google claimed gave it quantum supremacy. And this same graph is what the Chinese scientists achieved. To do so, they represented the problem as a 3D mathematical array - a tensor - which enabled their 512 GPUs' specialized tensor cores to solve it by simply multiplying the values in the array.
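The key idea can be sketched in a few lines (an assumption-laden illustration, not the team's actual code; NumPy stands in for the GPU tensor cores, and the array sizes are arbitrary): once the problem is expressed as arrays, solving it reduces to batched matrix multiplication, which is exactly the operation tensor cores accelerate.

```python
# Illustrative sketch (not the Chinese team's code): a tensor-style
# computation expressed as a batch of matrix multiplications, the
# operation GPU tensor cores are built to accelerate.
import numpy as np

rng = np.random.default_rng(1)

# A 3D array: a batch of 512 matrices (sizes chosen arbitrarily here).
batch = rng.standard_normal((512, 64, 64))

# Contracting the array pairwise is just a matrix multiply per slice;
# einsum expresses the whole batched contraction in one call.
contracted = np.einsum('bij,bjk->bik', batch, batch)

print(contracted.shape)  # (512, 64, 64)
```

Each `bij,bjk->bik` slice is an ordinary matrix product, so the entire workload maps onto hardware-accelerated multiply-accumulate units rather than general-purpose cores - one plausible reason "just" 512 GPUs sufficed.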
“I think they’re right that if they had access to a big enough supercomputer, they could have simulated the … task in a matter of seconds,” Scott Aaronson, a computer scientist at the University of Texas, told Science. The Chinese team places this estimation at 12 seconds of compute time.
In fairness, Google's scientists did leave a caveat in their paper. Sergio Boixo, principal scientist for Google Quantum AI, said in an email to Science that "classical algorithms would improve”. And improve they did — perhaps a bit too fast, dulling the edge of Google's claim and ultimately proving IBM's objections right.
But the Google engineers did stress one point: technology is forever evolving, and quantum computing is still going through the kind of leaps-and-bounds stage that has become a few-and-far-between occurrence for classical systems. Sycamore produced its spiky outline at a low fidelity (0.2%); today's quantum computers could do better, thanks to improvements in error correction.
The low fidelity achieved by Sycamore was exactly the bit that gave the Chinese scientists some leeway - they only had to improve their calculation's fidelity to 0.37%. That's enough to beat Sycamore, but still a far cry from what is theoretically possible. That fact and the in-development nature of quantum computing led Boixo to add that "we don't think this classical approach can keep up with quantum circuits in 2022 and beyond."
And while that too is very likely to be correct, it would seem that Google has to remove the quantum supremacy trophy from its wall. Other hands are sure to rise to claim it. It's only a matter of probability - and as such, it's also just a matter of time.
Francisco Pires is a freelance news writer for Tom's Hardware with a soft side for quantum computing.
Seems to me there is more gain in efficiency from improving the algorithm than from running it on a better computer.
thisisaname said: "Seems to me there is more gain in efficiency from improving the algorithm than from running it on a better computer."
Yes, agreed. Despite progress in computer hardware, it's still insufficient to overcome that 10,000-year runtime (10,000x faster is still a year). 15 hours may seem long, but it's mighty impressive compared to 10,000 years.
thisisaname said: "Seems to me there is more gain in efficiency from improving the algorithm than from running it on a better computer."
Please clarify. Are you suggesting that:
a) The quantum computer is the better computer but the algorithm used by 512 GPUs is more efficient
b) The 512 GPUs are the better computer and the algorithm used in quantum computing is more efficient.
I don't know whether to praise you or rant at you. :nomouth:
thisisaname said: "Seems to me there is more gain in efficiency from improving the algorithm than from running it on a better computer."
Why not both?