# Google Unveils 72-Qubit Quantum Computer With Low Error Rates

Google's Bristlecone quantum computer

Google announced a 72-qubit universal quantum computer that promises the same low error rates the company saw in its first 9-qubit quantum computer. Google believes that this quantum computer, called Bristlecone, will be able to bring us to an age of quantum supremacy.

### Ready For Quantum Supremacy

Google has teased before that it would build a 49-qubit quantum computer to achieve “quantum supremacy.” This achievement would show that quantum computers can solve certain well-defined computer science problems faster than the fastest supercomputers in the world can.

In a recent announcement, Google said:

> If a quantum processor can be operated with low enough error, it would be able to outperform a classical supercomputer on a well-defined computer science problem, an achievement known as quantum supremacy. These random circuits must be large in both number of qubits as well as computational length (depth).
>
> Although no one has achieved this goal yet, we calculate quantum supremacy can be comfortably demonstrated with 49 qubits, a circuit depth exceeding 40, and a two-qubit error below 0.5%. We believe the experimental demonstration of a quantum processor outperforming a supercomputer would be a watershed moment for our field, and remains one of our key objectives.

Not long after Google started talking about its 49-qubit quantum computer, IBM showed that for some specific quantum applications, 56 qubits or more may be needed to prove quantum supremacy. It seems Google wanted to remove all doubt, so now it’s experimenting with a 72-qubit quantum computer.

Don’t let the numbers fool you, though. Right now, the most powerful supercomputers can simulate only 46 qubits, and for every additional qubit that needs to be simulated, the memory requirements typically double (although some system-wide efficiency can be gained with new innovations).

Therefore, simulating a 72-qubit quantum computer would require millions of times more RAM (2^(72-46), roughly 67 million times more). We probably won’t be able to fit that much RAM into a supercomputer anytime soon, so if Bristlecone can run any algorithm faster than our most powerful supercomputers can, the quantum supremacy era will have arrived.
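To make that doubling concrete, here's a quick back-of-the-envelope sketch. The 16-bytes-per-amplitude figure is an assumption (double-precision complex numbers), not a number from the article:

```python
# Back-of-the-envelope sketch: a full state vector for n qubits holds
# 2**n complex amplitudes. Assuming 16 bytes per amplitude
# (double-precision complex numbers):

BYTES_PER_AMPLITUDE = 16

def statevector_bytes(n_qubits: int) -> int:
    """Memory needed to store a full n-qubit state vector."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

PIB = 2 ** 50  # bytes in a pebibyte

print(f"46 qubits: {statevector_bytes(46) / PIB:,.0f} PiB")   # prints "46 qubits: 1 PiB"
print(f"72 qubits: {statevector_bytes(72) / PIB:,.0f} PiB")   # prints "72 qubits: 67,108,864 PiB"
print(f"ratio: 2^(72-46) = {2 ** (72 - 46):,}x")              # prints "ratio: 2^(72-46) = 67,108,864x"
```

The ratio between the two sizes is exactly the 2^(72-46) factor mentioned above: about 67 million times more memory.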

### High Number Of Qubits Is Not Enough

A high number of qubits is not the only thing that’s needed to achieve quantum supremacy. You also need qubits with low error rates so they don’t mess up the calculations. A useful quantum computer is a function of both the number of qubits and the error rate.

According to Google, quantum supremacy requires an error rate below roughly 1%, coupled with close to 100 qubits. Google seems to have achieved this with the 72-qubit Bristlecone, which posts a 1% error rate for readout, 0.1% for single-qubit gates, and 0.6% for two-qubit gates.
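As a rough sanity check on why these per-gate numbers matter, you can multiply out per-operation success probabilities. This is a simplified independent-error model with made-up circuit sizes, not Google's actual benchmark:

```python
# Rough illustration (assumed circuit shape, NOT Google's benchmark):
# estimate the chance a circuit runs entirely error-free given
# Bristlecone's reported error rates, assuming errors are independent.

P_READOUT = 0.01   # 1% readout error (per qubit)
P_1Q_GATE = 0.001  # 0.1% single-qubit gate error
P_2Q_GATE = 0.006  # 0.6% two-qubit gate error

def success_probability(n_1q: int, n_2q: int, n_qubits: int) -> float:
    """Probability that every gate and every readout succeeds."""
    return ((1 - P_1Q_GATE) ** n_1q
            * (1 - P_2Q_GATE) ** n_2q
            * (1 - P_READOUT) ** n_qubits)

# Hypothetical deep circuit on all 72 qubits (gate counts are made up):
p = success_probability(n_1q=500, n_2q=1000, n_qubits=72)
print(f"error-free run probability: {p:.1%}")
```

Even with sub-1% per-gate errors, a deep circuit's chance of a fully error-free run is tiny, which is why supremacy experiments rely on statistics over many runs and why error rates dominate the roadmap.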

Google Quantum AI Lab's intended progress for quantum computers

Quantum computers will begin to become highly useful in solving real-world problems when we can achieve error rates of 0.1-1% coupled with hundreds of thousands to millions of qubits.

According to Google, an ideal quantum computer would have at least hundreds of millions of qubits and an error rate lower than 0.01%. That may take several decades to achieve, even if we assume some kind of “Moore’s Law” for quantum computers (which so far seems to hold, judging by the progress of Google, IBM, and D-Wave over the past few years).

That said, we may start seeing some "useful" applications of quantum computers well before that. For instance, breaking most existing cryptography may be possible when the quantum computers have only a few thousand qubits. If the current rate of progress for quantum computers holds, we may be able to reach that in about a decade.
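For a feel of that timeline, here is a purely speculative extrapolation. The doubling period and the qubit target are assumptions for illustration, not figures from the article:

```python
# Speculative extrapolation (assumed numbers, not from the article):
# starting at Bristlecone's 72 qubits, how long until ~4,000 qubits
# if the qubit count doubles every ~18 months?
import math

START_QUBITS = 72
TARGET_QUBITS = 4000      # assumed "breaks most crypto" scale
DOUBLING_YEARS = 1.5      # assumed doubling period

doublings = math.log2(TARGET_QUBITS / START_QUBITS)
years = DOUBLING_YEARS * doublings
print(f"{years:.1f} years")  # prints "8.7 years"
```

Under those assumptions the math lands at roughly 8–9 years, which is consistent with the "about a decade" estimate above.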

Google is “cautiously optimistic” that the Bristlecone quantum computer will not only achieve quantum supremacy, but could also be used as a testbed for researching qubit scalability and error rates, as well as applications such as simulation, optimization, and machine learning.

And without a quantum processor, you can't encrypt all your data with a quantum key, because encrypting/decrypting the messages would take too much time. The future is pretty bad for security.

Also, call me when they have quantum speedup on a range of tasks as broad as a GPU can handle. That's probably going to take a crapload of qubits. If it can only accelerate very specific problems, it's more in line with a custom ASIC.

A.I. life will find a way!

If you are implying something along the lines of password cracking via trillions of attempts per second, something like this is avoidable through time delay implementations.

It's not about brute force guessing a password really quickly, it's about breaking the encryption itself.

For example, AFAIK a lot of public key cryptography is based around the difficulty of integer factorization for traditional computers. Public key cryptography forms the basis for most encrypted electronic communication. Quantum computers have the potential to be much faster at factorization than traditional computers, and thus can break said encryption.

Or to quote Wikipedia:

"The problem with currently popular algorithms is that their security relies on one of three hard mathematical problems: the integer factorization problem, the discrete logarithm problem or the elliptic-curve discrete logarithm problem. All of these problems can be easily solved on a sufficiently powerful quantum computer running Shor's algorithm."https://en.wikipedia.org/wiki/Post-quantum_cryptography

Unfortunately, I believe AI to require a QC-core to be extremely effective, so we might have to wait awhile before we can see real-world benefits.

https://www.sciencedaily.com/releases/2017/08/170802103051.htm

https://www.chemistryworld.com/feature/quantum-chemistry-on-quantum-computers/3007680.article

Once this has been achieved, one could just feed the previously breakable (by quantum computers) hash values into a quantum-resistant hash function to derive new hashes, without having to make major changes to existing infrastructure.

Yes, it is one reason that I am extremely doubtful about the future of automated vehicles/robots for use in war or security applications. Encrypted code may well become meaningless with the development of quantum computers. Meaning that war never changes, and we are right back to humans doing the fighting. I actually wrote a book that touched on this subject, although no one read it.

Well the brain has 72 petaflops of processing power

Just what situations can tolerate an error rate? Air traffic control? Maybe not, for instance. [Witness the Limiting Error Rate graph above in the article.] Maybe weather simulations, where a bad cloud solution isn't a big thing, but there are other, more personal applications where errors are an issue.

If one assumes the EC threshold declines, but not to 0, can the quantum EC ever catch up with the quantum computations? I can easily imagine Google tossing a "hey, errors don't count" out there.

Engineering design and analysis, especially aerodynamics and strength testing.

Effectiveness of medical treatments for things like cancer, etc.

Anything and everything genetics related.

Essentially anything supercomputers work on now would be thrown at a quantum computer. Here are a few of the science applications: https://www.livescience.com/6392-9-super-cool-supercomputers.html

I think you're confusing quantum computers with the memory requirements of classical computers to simulate quantum computers. If memory were such a hurdle, Google wouldn't be talking about scaling up the number of qubits like they are.

We shouldn't assume these will ever be cost-effective for the public to have in their own homes. For the foreseeable future, they will live in the cloud.

Most optimization problems are fairly tolerant of errors, since it's often acceptable to get a nearly optimal answer. Neural networks would be another.

Although, realistically, someone is just going to break the blockchain and create "legitimate" transactions from any wallet to any other wallet until they've paid off their fancy computer. Then they'll crash the currency down to 0 and move on to another one until there are no longer any that can function on traditional computers.

Or maybe they don't want to profit at all. It could just be some Google researcher who's mad that GPUs are expensive.

Thank you for the explanation!