Am I crazy or have I heard this same announcement from Google and others like 5 times at this point?
The idea: Quantum Computation of Molecular Structure Using Data from Challenging-To-Classically-Simulate Nuclear Magnetic Resonance Experiments https://journals.aps.org/prxquantum/abstract/10.1103/PRXQuan...
Verifying the result by another quantum computer (it hasn't been yet): Observation of constructive interference at the edge of quantum ergodicity https://www.nature.com/articles/s41586-025-09526-6
Let a classical computer use an error-prone stochastic method and it still blows the doors off of QC
this is a false comparison
There is a section in the article about future real world application, but I feel like these articles about quantum "breakthroughs" are almost always deliberately packed with abstruse language. As a result I have no sense about whether these suggested real world applications are a few years away or 50+ years away. Does anyone?
https://www.nature.com/articles/s41586-025-09526-6
In the last sentence of the abstract you will find:
"These results ... indicate a viable path to practical quantum advantage."
And in the conclusions:
"Although the random circuits used in the dynamic learning demonstration remain a toy model for Hamiltonians that are of practical relevance, the scheme is readily applicable to real physical systems."
So the press release is a little over-hyped. But this is real progress nonetheless (assuming the results actually hold up).
[UPDATE] It should be noted that this is still a very long way from cracking RSA. That requires quantum error correction, which this work doesn't address at all. This work is in a completely different regime of quantum computing, looking for practical applications that use a quantum computer to simulate a physical quantum system faster than a classical computer can. The hardware improvements that produced progress in this area might be applicable to QEC some day, but this is not direct progress towards implementing Shor's algorithm at all. So your crypto is still safe for the time being.
I agree it's not very precise without knowing which of the world's fastest supercomputers they're talking about, but there was no need to leave out this tidbit.
"Quantum verifiability means the result can be repeated on our quantum computer — or any other of the same caliber — to get the same answer, confirming the result."
"The results on our quantum computer matched those of traditional NMR, and revealed information not usually available from NMR, which is a crucial validation of our approach."
It certainly seems like this time, there finally is a real advantage?
The announcement is about an algorithm which they are calling Quantum Echoes, where you set up the experiment, perturb one of the qubits and observe the “echoes” through the rest of the system.
They use it to replicate a classical experiment in chemistry done using nuclear magnetic resonance (NMR) spectroscopy. They say they are able to reproduce the results of that conventional experiment and gather additional data which is unavailable via conventional means.
Edit: An effective key space of 2^64 is not secure by modern-day standards. It was secure back when DES was the state of the art.
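Rough arithmetic behind that claim (the guess rate here is a hypothetical figure for dedicated hardware, just to set the scale):

    # Exhaustive search over a 2^64 key space at an assumed 10^12 guesses/second.
    keys = 2 ** 64
    rate = 1e12                              # guesses per second (hypothetical)
    years = keys / rate / (3600 * 24 * 365)
    print(f"~{years:.1f} years to cover the whole space (half that on average)")

which is why 2^64 stopped counting as secure a long time ago.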
"Error prone" hardware is not "a stochastic resource". Error prone hardware does not provide any value to computation.
I don't disagree, but these days I'm happy to see any advanced research at all.
Granted, too often I see the world through HN-colored glasses, but it seems like so many technological achievements are variations on getting people addicted to something in order to show them ads.
Did Bellcore or Xerox PARC do a lot of university partnerships? I was into other things in those days.
It would be a pain to manage but it would be safe from quantum computing.
So basically you’re able to go directly from running the quantum experiment to being able to simulate the dynamics of the underlying system, because the Jacobian and Hessian are the first and second partial derivatives of the system with respect to all of its parameters in matrix form.
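If it helps, here's a minimal numerical sketch of that statement, with a made-up scalar observable standing in for whatever the experiment actually measures (the function and parameter values are purely illustrative):

    import numpy as np

    def observable(theta):
        # Hypothetical scalar observable of the system as a function of its
        # parameters; a stand-in for the quantity the experiment measures.
        return np.sin(theta[0]) * np.cos(theta[1]) + 0.1 * theta[0] * theta[1]

    def jacobian(f, theta, eps=1e-5):
        # First partial derivatives via central finite differences.
        theta = np.asarray(theta, dtype=float)
        grad = np.zeros_like(theta)
        for i in range(theta.size):
            step = np.zeros_like(theta)
            step[i] = eps
            grad[i] = (f(theta + step) - f(theta - step)) / (2 * eps)
        return grad

    def hessian(f, theta, eps=1e-4):
        # Second partial derivatives: finite-difference the gradient itself.
        theta = np.asarray(theta, dtype=float)
        n = theta.size
        H = np.zeros((n, n))
        for j in range(n):
            step = np.zeros_like(theta)
            step[j] = eps
            H[:, j] = (jacobian(f, theta + step) - jacobian(f, theta - step)) / (2 * eps)
        return H

    theta0 = np.array([0.3, 1.2])
    print(jacobian(observable, theta0))  # vector of first partial derivatives
    print(hessian(observable, theta0))   # matrix of second partial derivatives

The point being: if the experiment hands you those derivatives directly, you skip the finite-difference loop (and the many evaluations it implies) entirely.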
The new experiment generates the same result every time you run it (after a small amount of averaging). It also involves running a much more structured circuit (as opposed to a random circuit), so all-in-all, the result is much more 'under control.'
As a cherry on top, the output has some connection to molecular spectroscopy. It still isn't that useful at this scale, but it is much more like the kind of thing you would hope to use a quantum computer for someday (and certainly more useful than generating random bitstrings).
Where is the exact threat?
I'll add this to my list of useful phrases.
Q: Hey AndrewStephens, you promised that task would be completed two days ago. Can you finish it today?
A: Results indicate a viable path to success.
No software update alone can fix this. In theory, once an attack appears feasible on the horizon, they could move to post-quantum encryption and offer the ability to transfer from old-style addresses to new-style addresses, but this would be a herculean effort for everyone involved and would require all holders (not miners) to actively update their wallets. Basically infeasible.
Fortunately this will never actually happen. It's way more likely that ECDSA is broken by mundane means (better stochastic approaches most likely) than quantum computing being a factor.
I'm pretty reluctant to make any negative comments about these kinds of posts because it will prevent actually achieving the desired outcome.
The problem is not with these papers (or at least not ones like this one) but how they are reported. If quantum computing is going to succeed it needs to do the baby steps before it can do the big steps, and at the current rate the big leaps are probably decades away. There is nothing wrong with that; it's a hard problem and it's going to take time. But then the press comes in and reports that quantum computing is going to run a marathon tomorrow, which is obviously not true and confuses everyone.
Hyperbolic claims like this are for shareholders who aren't qualified to judge for themselves because they're interested in future money and not actual understanding. This is what happens when you delegate science to corporations.
A nice benefit is it solves the problem with Satoshi’s (of course not a real person or owner) wallet. Satoshi’s wallet becomes the de facto quantum advantage prize. That’s a lot of scratch for a research lab.
Another response is to come to terms with a possibly meaningless and Sisyphean reality and to keep pushing the boulder (that you care about) up the hill anyway.
I’m glad the poster is concerned and/or disillusioned about the hype, hyperbole and deception associated with this type of research.
It suggests he still cares.
> in partnership with The University of California, Berkeley, we ran the Quantum Echoes algorithm on our Willow chip...
And the author affiliations in the Nature paper include:
Princeton University; UC Berkeley; University of Massachusetts, Amherst; Caltech; Harvard; UC Santa Barbara; University of Connecticut; MIT; UC Riverside; Dartmouth College; Max Planck Institute.
This is very much in partnership with universities and they clearly state that too.
Any rational economic actor would participate in a post-quantum hard fork because the alternative is losing all their money.
If this was a company with a $2 trillion market cap there'd be no question they'd move heaven-and-earth to prevent the stock from going to zero.
Y2K only cost $500 billion[1] adjusted for inflation and that required updating essentially every computer on Earth.
But as far as moving balances - it's up to the owners. It would start with anybody holding a balance high enough to make it worth the amount of money it would take to crack a single key. That cracking price will go down, and the value of BTC may go up. People can move over time as they see fit.
So we are all in a collective flap that someone can see my bank transactions? These are pretty much public knowledge to governments/central banks/clearing houses anyway -- doesn't seem like all that big a deal to me.
(I work on payment processing systems for a large bank)
As far as I am aware, elliptic curve is also vulnerable to quantum attacks.
The threat is generally both passive eavesdropping to decrypt later and also active MITM attacks. Both of course require the attacker to be in a position to eavesdrop.
> Let’s say you crack the encryption key used in my bank between a java payment processing system and a database server.
Well if you are sitting in the right place on the network then you can.
> how do you mitm this traffic?
Depends on the scenario. If you are a government or an ISP then it's easy. Otherwise it might be difficult. Typical real-life scenarios are when the victim is using Wi-Fi and the attacker is in the physical vicinity.
Like all things crypto, it always depends on context. What information are you trying to protect, and who are you trying to protect it from?
All that said, people are already experimenting with PQC, so it might mostly be moot by the time a quantum computer comes around. On the other hand, people are still using MD5, so legacy will bite.
Non-verifiable computations include things like pulling from a hard-to-compute probability distribution (i.e. random number generator) where it is faster, but the result is inherently not the same each time.
This paper on verifiable advantage is a lot more compelling, with Scott Aaronson and Quantinuum among other great researchers.
If you can read the TLS session in general, you can capture the TLS session ticket and then use that to make a subsequent connection. This is easier as you don't have to be injecting packets live or make inconvenient packets disappear.
The MBA wakes up, sees the fire, sees a fire extinguisher in the corner of the room, empties the fire extinguisher to put out the fire, then goes back to sleep.
The engineer wakes up, sees the fire, sees the fire extinguisher, estimates the extent of the fire, determines the exact amount of foam required to put it out including a reasonable tolerance, dispenses exactly that amount to put out the fire, and then, satisfied that there is enough left in case of another fire, goes back to sleep.
The quantum computing physicist wakes up, sees the fire, observes the fire extinguisher, determines that there is a viable path to practical fire extinguishment, and goes back to sleep.
> Quantum verifiability means the result can be repeated on our quantum computer — or any other of the same caliber — to get the same answer, confirming the result.
Not really. This would be caught, if not instantly then when a batch goes for clearing or reconciliation, and an investigation would be started immediately.
There are safeguards against this kind of thing that can't be really defeated by breaking some crypto. We have to protect against malicious employees etc also.
One cannot simply insert bank transactions like this. These are really extremely complicated flows.
I think this is all overhyped though. It seems likely we will have plenty of warning to migrate before quantum computers get big enough to steal wallets. Per Wikipedia:
> The latest quantum resource estimates for breaking a curve with a 256-bit modulus (128-bit security level) are 2330 qubits and 126 billion Toffoli gates.
IIRC this is speculated to be the reason ECDSA was selected for Bitcoin in the first place.
Remember, it is not about general quantum computing, it's about implementing the quantum computation of Shor's algorithm.
Somehow, I'm not all that scared. Perhaps I'm naive.. :}
These are fairly robust systems. You'd likely have a much bigger impact DoSing the banks.
[1] - https://github.com/jlopp/bips/blob/quantum_migration/bip-pos...
But since we already have post-quantum algorithms, the end state of cheap quantum computers is just a new equilibrium where people use the new algorithms and they can't be directly cracked, and it's basically the same, except maybe you can decrypt historical stuff, but who knows if that's worth it.
I think what they are trying to do is to contrast these to previous quantum advantage experiments in the following sense.
The previous experiments involve sampling from some distribution, which is believed to be classically hard. However, it is a non-trivial question whether you succeed or fail at this task: having a perfect sampler from the same distribution won't allow you to easily verify the samples.
On the other hand, these experiments involve measuring some observable, i.e., the output is just a number, and you could compare it to the value obtained in a different way (on a different or the same computer, or even some analog experimental system).
Note that these observables are expectation values over the samples, but in the previous experiments, since the circuits are random, all the expectation values are very close to zero and it is impossible to actually resolve them from the experiment.
Disclaimer: this is my speculation about what they mean because they didn't explain it anywhere from what I can see.
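A toy illustration of that point about resolving expectation values (all numbers here are made up; it's just the shot-noise scaling):

    import numpy as np

    rng = np.random.default_rng(0)
    shots = 10_000

    def estimate(true_mean):
        # Pretend each shot is a +/-1 measurement outcome whose bias is set by true_mean.
        p_plus = (1 + true_mean) / 2
        samples = rng.choice([1, -1], size=shots, p=[p_plus, 1 - p_plus])
        est = samples.mean()
        sem = samples.std(ddof=1) / np.sqrt(shots)  # shot-noise scale
        return est, sem

    for mu in (0.001, 0.1):  # near-zero signal (random circuit) vs structured signal
        est, sem = estimate(mu)
        print(f"true={mu:>6}: estimate={est:+.4f} +/- {sem:.4f}  SNR~{abs(est)/sem:.1f}")

With a near-zero true value the estimate is typically indistinguishable from shot noise, while a larger structured signal is resolved easily with the same number of shots.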
The current situation with "AI" took off because people learned their lessons from the last round of funding cuts, the "AI winter".
That being said, any pushback against funding quantum research would be like chopping your own hands off.
So I do not think these tools or economic substrate layers are going anywhere. They are very valuable for the particular kinds of applications that can be built with them and also as additional productive layers to the credit and liquidity markets nationally, internationally, and also globally/universally.
So there is a lot of institutional interest, including governance interest, in using them to build better systems. Bitcoin on its own would carry less of that justification, but because of Ethereum's function as an engine which can drive utility, the two together are a formidable and quantum-resistant platform that can scale into the hundreds of trillions of dollars and, in Ethereum's case, certainly beyond $1Q in time.
I'm very bullish on the underlying technology, even beyond tokenomics for any particular project. The underlying technologies are powerful protocols that facilitate the development and deployment of Non Zero Sum systems at scale. With Q-Day not expected until end of 2020s or beginning of 2030s, that is a considerable amount of time (in the tech world) to lay the ground work for further hardening and discussions around this.
A wind tunnel is a great tool for solving aerodynamics and fluid flow problems, more efficiently than a typical computer. But we don't call it a wind-computer, because it's not a useful tool outside of that narrow domain.
The promise of quantum computing is that it can solve useful problems outside the quantum realm - like breaking traditional encryption.
I think this current cycle is going to change that though. The kinds of projects spinning up are truly massive, innovative, and interesting. Stay tuned!
It doesn't require all holders to update their wallets. Some people would fail to do so and lose their money. That doesn't mean the rest of the network can't do anything to save themselves. Most people use hosted wallets like Coinbase these days anyway, and Coinbase would certainly be on top of things.
Also, you don't need to break ECDSA to break BTC. You could also do it by breaking mining. The block header has a 32-bit nonce at the very end. My brain is too smooth to know how realistic this actually is, but perhaps someone could use a QC to perform the final step of SHA-256 on all 2^32 possible values of the nonce at once, giving them an insurmountable advantage in mining. If only a single party has that advantage, it breaks the Nash equilibrium.
But if multiple parties have that advantage, I suppose BTC could survive until someone breaks ECDSA. All those mining ASICs would become worthless, though.
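For what it's worth, the textbook quantum speedup for unstructured search (Grover's algorithm) is quadratic rather than "all at once"; here's a rough sketch of what that would buy over the 32-bit nonce space, counting only idealized oracle queries and ignoring how expensive evaluating SHA-256 coherently would be:

    import math

    N = 2 ** 32                         # possible values of the 32-bit header nonce
    classical_expected = N // 2         # expected classical guesses for one marked value
    grover_queries = math.isqrt(N)      # ~sqrt(N) oracle calls in the idealized Grover case
    print(f"classical ~{classical_expected:,} hashes vs Grover ~{grover_queries:,} queries")

So it's an edge, not a free pass, and the per-query overhead matters a lot.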
Quantum supremacy has been claimed many times in the past, and then other groups have shown they can do better with optimized classical methods.
Even as a Googler I can find plenty of reasons to be cynical about Google (many involving AI), but the quantum computing research lab is not one of them. It's actual scientific research, funded (I assume) mostly out of advertising dollars, and it's not building something socially problematic. So why all the grief?
But apparently they haven't demonstrated the actual portability between two different quantum computers.
https://scottaaronson.blog/?p=9098
Aaronson did work at OpenAI, but not on image generation. Maybe you could argue the OpenAI safety team he worked on should be involved here, but I'm pretty sure image generation was after his time; and even if he did work directly on image generation under NDA or something, attributing that cartoon to Aaronson would be like attributing a cartoon made in Photoshop by an antisemite to a random Photoshop programmer, unless he maliciously added antisemitic images to the training data or something.
The most charitable interpretation, which I think Aaronson has also offered, is that Aaronson believed Woit was an antisemite because of a genocidal chain of events that in Aaronson's belief would necessarily happen with a democratic solution, and that even if Woit didn't believe that that would be the consequence, or believed in democracy deontologically and thought the UN could step in under the genocide convention if any genocide began to be at risk of unfolding, the intent of Woit could be dismissed, and Woit could therefore somehow be lumped in with the antisemite who sent Aaronson the image.
Aaronson's stated belief also is that any claim that Israel was committing a genocide in the last few years is a blood libel, because he believes the population of Gaza is increasing and it can't be a genocide unless there is a population decrease during the course of it. This view of Aaronson's would imply things like: if every male in Gaza was sterilized, and the UN stepped in and stopped it as a genocide, it would be a blood libel to call that genocide so long as the population didn't decrease during the course of it, even if it did decrease afterwards. But maybe he would clarify that it could include decreases that happen with a delayed effect of the actions. But these kinds of strong beliefs about blood libel are, I think, part of why he felt OK labeling the comic with Woit's name.
I also don't think that if the population does go down, or has been going down, he will say it was from a genocide, but rather that populations can go down from war. He's only proposing that a population must go down as a necessary criterion of genocide, not a sufficient one. I definitely don't agree with him; to me, if Hamas carried out half of an Oct 7 every day it would clearly be a genocide, even if that brought the replacement rate to 1.001, and it wouldn't change anything if it brought it to 0.999.
Then we ignore today, and launder that into a gish gallop of free association, torturing the meaning of words to shoehorn in the idea that all the science has it wrong and, inter alia, that the quantum computer uses quantum phenomena to compute so it might be a fake, useless computer, like a wind tunnel. shrugs
It's a really unpleasant thing to read; reminds me of the local art school dropout bending my ear about crypto at the bar at 3 am in 2013.
I get that's all people have to reach for, but personally, I'd rather not inflict my free-association on the world when I'm aware I'm half-understanding, fixated on the past when discussing something current, and I can't explain the idea I have as something concrete and understandable even when I'm using technical terms.
> This is the first time in history that any quantum computer has successfully run a verifiable algorithm that surpasses the ability of supercomputers.
> guywithahat [...] I'll be waiting for Scott Adams to tell me what to think about this
Scott Adams
Text adventure guy: https://en.wikipedia.org/wiki/Scott_Adams_(game_designer)
Batshit cartoonist: https://en.wikipedia.org/wiki/Scott_Adams
(also, for fun, a cartoon by Scott Aaronson and Zack Weinersmith: https://www.smbc-comics.com/comic/the-talk-3)
I turned 50 years old this year, forgive an old man a few chuckles.
Maybe you're thinking specifically of LLM labs. I agree this is happening there, but I wouldn't be as dramatic. Everywhere else, university-corporation/government lab partnerships are still going very strong.
My understanding, though, is that these steps are really the very beginning: using a quantum computer with quantum algorithms to prove that it’s possible.
Once proven (which maybe this article is claiming?) the next step is actually creating a computer with enough qubits and entanglable pairs and low enough error rates that it can be used to solve larger problems at scale.
Because my current understanding with claims like these is that they are likely true, but only at tiny scales.
It’d be like saying “I have a new algorithm for factoring semiprimes that is 10000x faster than the current best, but it can only factor numbers up to 103.”
I am unaware of any comics Aaronson made, and I don't blame him for anything he did make or was loosely associated with. It is incredible the lengths people are willing to go to in order to claim people are crazy, though, both in regards to Adams and Aaronson.
And what the hell are you calling a gish gallop? They wrote four sentences explaining a single simple argument. If you design a way to make qubits emulate particle interactions, that's a useful tool, but it's not what people normally think of as a "computer".
And whatever you're saying about anyone claiming "all the science has it wrong" is an argument that only exists inside your own head.
"Back in 2019, we demonstrated that a quantum computer could solve a problem that would take the fastest classical supercomputer thousands of years."
The actual article has much more measured language, and in the conclusion section gives three criteria for "practical quantum advantage":
https://www.nature.com/articles/s41586-025-09526-6
"(1) The observable can be experimentally measured with the proper accuracy, in our case with an SNR above unity. More formally, the observable is in the bounded-error quantum polynomial-time (BQP) class.
(2) The observable lies beyond the reach of both exact classical simulation and heuristic methods that trade accuracy for efficiency.
[...]
(3) The observable should yield practically relevant information about the quantum system.
[...] we have made progress towards (1) and (2). Moreover, a proof-of-principle for (3) is demonstrated with a dynamic learning problem."
So none of the criteria they define for "practical quantum advantage" are fully met as far as I understand it.
The key word is "practical" - you can get quantum advantage from precisely probing a quantum system with enough coherent qubits that it would be intractable on a classical computer. But that's exactly because a quantum computer is a quantum system; and because of superposition and entanglement, a linear increase in the number of qubits means an exponential increase in computational complexity for a classical simulation. So if you're able to implement and probe a quantum system of sufficient complexity (in this case ~40 qubits rather than the thousands it would take for Shor's algorithm), that is ipso facto "quantum advantage".
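(To put a number on that exponential scaling: below is a crude illustration assuming a brute-force statevector simulation storing 2^n complex128 amplitudes. Real classical methods such as tensor-network contraction can do far better on many circuits, so treat this as intuition for the naive cost, not an estimate of any particular algorithm.)

    # Memory needed just to store a full n-qubit statevector:
    # 2**n complex amplitudes at 16 bytes (complex128) each.
    for n in (30, 40, 50):
        gib = (2 ** n) * 16 / 2 ** 30
        print(f"{n} qubits: {gib:,.0f} GiB")
    # 30 qubits:         16 GiB  (fits in a laptop)
    # 40 qubits:     16,384 GiB  (a large cluster's worth of RAM)
    # 50 qubits: 16,777,216 GiB  (beyond any existing machine's memory)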
It's still an impressive engineering feat because of the difficulty in maintaining coherence in the qubits with a precisely programmable gate structure that operates on them, but as far as I can see (and I've just scanned the paper and had a couple of drinks this evening) what it really means is that they've found a way to reliably implement in hardware a quantum system that they can accurately extract information from in a way that would be intractable to simulate on classical machines.
I might well be missing some subtleties because of the aforementioned reasons, and I'm no expert, but it seems like the press release is unsurprisingly in the gray zone between corporate hype and outright deceit (which, as we know, is a large and constantly expanding multi-dimensional gray zone of heretofore unimagined fractal shades of gray).
Not really. But that doesn't mean it's not worth striving for. Breakthrough-to-commercial-application timelines are notoriously hard to predict. The only way to find out is to keep pushing at the frontier.
It's not that I don't "agree with it", there's nothing to agree with. "Not even wrong", in the Pauli sense.
I'd advise that when you're conjuring thoughts in other people's heads to give them meaning, so you can go full gloves-off and tell them off for the thoughts that were in their head and motivated their contributions to this forum, you pause and consider a bit more. Especially in the context of where you're encountering the behavior, say, an online discussion forum vs. a dinner party where you're observing a heated discussion among your children.
Cryptocurrencies would be the last thing I worry about w.r.t. quantum crypto attacks. Everything would be broken. Think banks, brokerage accounts, email, text messages - everything.
But if that's the only realm where anything close to supremacy has been demonstrated, being skeptical and setting your standards higher is reasonable. Not at all "not even wrong".
> I'd advise that when you're conjuring thoughts in other people's heads
Are you accusing me of strawmanning? If you think people are being "not even wrong" then I didn't strawman you at all, I accurately described your position. Your strawman about science was the only one in this comment thread. And again there was no gish gallop, and I hope if nothing else you double check the definition of that term or something.
A part of me thinks quantum computing is mostly bullshit, but I would be very happy to be wrong. I should probably learn Q# or something just to be sure.
Even so, I don't agree that quantum is a threat to crypto. There are already well-known quantum-resistant encryption schemes being deployed live in browsers, today. Crypto can just start adopting one of these schemes today, and we're still probably decades away from a QC that can break the kinds of keys that crypto security uses. The transition will be slightly more complex for proof-of-work schemes, since those typically have dedicated hardware, but other types of crypto coins can switch in months, most likely, if they decide to, at least by offering new wallet types or something.
https://radar.cloudflare.com/adoption-and-usage
In contrast, cryptocurrencies have to upgrade the entire network all at once or it’s effectively a painful fork. That effort appears to just be getting talked about now, without even starting to discuss timing:
In HTTPS for example, the server and client must agree on how to communicate, and we’ve already had to deprecate older, now-insecure cryptography standards. More options get added, and old ones will have to be deprecated. This isn’t a new thing, just maybe some cryptographic schemes will get rotated out earlier than expected.
I don't see why it wouldn't look like normal traffic.
> Somehow, I'm not all that scared. Perhaps I'm naive.. :}
We're talking about an attack that probably won't be practical for another 20 years, which already has countermeasures that are in testing right now. Almost nobody should be worried about it.
It's very strange that some people act like switching over to a post quantum cryptography scheme is trivial. Did you watch the video I replied to, which is a talk by an actual quantum computing researcher?
Bitcoin is much more centralized than the popular imagination would have you believe, both in terms of the small number of controlling interests behind the majority of the transaction capacity, and just as importantly the shared open source software running those nodes. Moreover, the economic incentives for the switch are strongly, perhaps even perfectly, aligned among the vast majority of node operators. Bitcoin is already dangerously close to, if not beyond, the possibility of a successful Byzantine attack; it just doesn't happen precisely because of the incentive alignment--if you're that large, you don't want to undermine trust in the network, and you're an easy target for civil punishment.
That's not really the issue; the really interesting part is existing encrypted information that three-letter agencies have likely dutifully stored in a vault and that's going to become readable. A lot of that communication was made under the assumption that it was secure.
What I would start worrying about is the security of things like messages sent via end-to-end encrypted services like WhatsApp and Signal. Intercepted messages can be saved now and decrypted any time in the future, so it's better to switch to more robust cryptography sooner rather than later. Signal has taken steps in this direction recently: https://arstechnica.com/security/2025/10/why-signals-post-qu....
Stragglers are a problem, of course, but that's why I thought this would be a harder problem for Bitcoin: for me to use PQC for HTTPS, only my browser and the server need to support it and past connections don't matter, whereas for a blockchain you need to upgrade the entire network to support it for new transactions _and_ have some kind of data migration for all of the existing data. I don't think that's insurmountable – Bitcoin is rather famously not as decentralized as the marketing would have you believe — but it seems like a harder level of coordination.
It's actually something we will notice. Arrests will be announced.
This SHOULD be the main application of the tech. It's hard to tell whether it is, because the authors can't do science communication; they can apparently only do sales.