> Quantum verifiability means the result can be repeated on our quantum computer — or any other of the same caliber — to get the same answer, confirming the result.
I think what they are trying to do is contrast this with previous quantum advantage experiments, in the following sense.
The previous experiments involve sampling from some distribution, which is believed to be classically hard. However, it is a non-trivial question whether you succeed or fail at this task: even having a perfect sampler from the same distribution won't let you easily verify the samples.
On the other hand, these experiments involve measuring some observable, i.e., the output is just a number, and you can compare it to the value obtained in a different way (on a different or the same computer, or even on some analog experimental system).
Note that these observables are expectation values over the samples, but in the previous experiments, since the circuits are random, all the expectation values are very close to zero, and it is impossible to actually resolve them from the experiment.
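To make that shot-noise point concrete, here is a minimal sketch (my own stylized numbers and scaling assumptions, not anything from the paper) of why an order-one observable can be resolved and compared across machines, while an expectation value that is exponentially small in the qubit count drowns in statistical noise:

```python
# Illustrative only: shot-noise argument with made-up numbers.

def shots_for_snr_one(expectation_value, variance=1.0):
    """Shots needed so |<O>| matches the standard error sqrt(variance / shots)."""
    return variance / expectation_value ** 2

# An engineered, order-one observable: a few hundred shots resolve it.
print(shots_for_snr_one(0.1))               # 100.0

# A generic observable of a random circuit, assumed to scale like 2^(-n/2).
n = 40
tiny = 2.0 ** (-n / 2)                      # ~1e-6
print(f"{shots_for_snr_one(tiny):.1e}")     # ~1.1e+12 shots -- hopeless in practice
```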
Disclaimer: this is my speculation about what they mean because they didn't explain it anywhere from what I can see.
But apparently they haven't demonstrated the actual portability between two different quantum computers.
> This is the first time in history that any quantum computer has successfully run a verifiable algorithm that surpasses the ability of supercomputers.
"Back in 2019, we demonstrated that a quantum computer could solve a problem that would take the fastest classical supercomputer thousands of years."
The actual article has much more measured language, and in the conclusion section gives three criteria for "practical quantum advantage":
https://www.nature.com/articles/s41586-025-09526-6
"(1) The observable can be experimentally measured with the proper accuracy, in our case with an SNR above unity. More formally, the observable is in the bounded-error quantum polynomial-time (BQP) class.
(2) The observable lies beyond the reach of both exact classical simulation and heuristic methods that trade accuracy for efficiency.
[...]
(3) The observable should yield practically relevant information about the quantum system.
[...] we have made progress towards (1) and (2). Moreover, a proof-of-principle for (3) is demonstrated with a dynamic learning problem."
So none of the criteria they define for "practical quantum advantage" are fully met as far as I understand it.
The key word is "practical" - you can get quantum advantage from precisely probing a quantum system with enough coherent qubits that it would be intractable on a classical computer. But that's exactly because a quantum computer is a quantum system; and because of superposition and entanglement, a linear increase in the number of qubits means an exponential increase in computational complexity for a classical simulation. So if you're able to implement and probe a quantum system of sufficient complexity (in this case ~40 qubits rather than the thousands it would take for Shor's algorithm), that is ipso facto "quantum advantage".
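As a back-of-the-envelope illustration of that exponential blow-up (my own numbers, not from the paper or press release), a brute-force state-vector simulation stores 2^n complex amplitudes, so every added qubit doubles the memory:

```python
# Rough illustration: memory for exact state-vector simulation (complex128 amplitudes).
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40, 50):
    gib = statevector_bytes(n) / 2 ** 30
    print(f"{n} qubits: {gib:,.0f} GiB")
# 30 qubits: 16 GiB            -- fits on a laptop
# 40 qubits: 16,384 GiB        -- a sizeable cluster
# 50 qubits: 16,777,216 GiB    -- out of reach for exact state-vector methods
```

Of course, classical simulation isn't limited to brute-force state vectors; tensor-network and other heuristic methods can do far better on many circuits, which is presumably why criterion (2) above calls out heuristics explicitly.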
It's still an impressive engineering feat, because of the difficulty of maintaining coherence in the qubits while a precisely programmable gate structure operates on them. But as far as I can see (and I've just scanned the paper and had a couple of drinks this evening), what it really means is that they've found a way to reliably implement in hardware a quantum system that they can accurately extract information from, in a way that would be intractable to simulate on classical machines.
I might well be missing some subtleties for the aforementioned reasons, and I'm no expert, but it seems like the press release is unsurprisingly in the grayzone between corporate hype and outright deceit (which, as we know, is a large and constantly expanding multi-dimensional grayzone of heretofore unimagined fractal shades of gray).