
161 points roboboffin | 4 comments
outworlder ◴[] No.42198355[source]
So, an inherently error-prone computation is being corrected by another very error-prone computation?
replies(3): >>42198861 #>>42198868 #>>42198986 #
1. sctb ◴[] No.42198868[source]
No problem, said von Neumann. https://www.scottaaronson.com/qclec/27.pdf
replies(2): >>42200875 #>>42200978 #
2. limit499karma ◴[] No.42200875[source]
What he actually said: "as long as the physical error probability ε is small enough" you can build a reliable system from unreliable parts.

So it remains for you to show that AI.ε ~= QC.ε, since JvN proved the case for a system made of similar parts (vacuum tubes) with the same error probability.
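
A toy illustration of the threshold (a sketch of mine, not from the lecture notes): triplicate an unreliable part and take a majority vote, and the failure probability drops from ε to roughly 3ε², which is an improvement exactly when ε is small enough.

    import random

    def majority_error(eps, trials=200_000):
        # Triplicate an unreliable component and take a majority vote.
        # The voter is assumed perfect here; von Neumann's construction
        # also handles noisy voters, at the cost of a smaller threshold.
        fails = 0
        for _ in range(trials):
            flips = sum(random.random() < eps for _ in range(3))
            fails += flips >= 2  # vote fails if 2 or 3 copies fail
        return fails / trials

    eps = 0.01
    print(majority_error(eps))              # simulated, ~3e-4
    print(3 * eps**2 * (1 - eps) + eps**3)  # exact: 2 or 3 of the 3 copies fail

Recursing k levels gives error roughly (3ε)^(2^k)/3, which only shrinks when 3ε < 1; that is the "ε small enough" clause doing all the work.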

(p.s. thanks for the link)

3. benreesman ◴[] No.42200978[source]
A quick, careless Google search didn’t yield Scott Aaronson’s take on this; as a layperson, his is the one take I’d regard seriously.

Has he remarked on it and my search-fu failed?

replies(1): >>42203912 #
4. AlexWilkinsNS ◴[] No.42203912[source]
Yes, he gave comments for a New Scientist piece about it: “It’s tremendously exciting,” says Scott Aaronson at the University of Texas at Austin. “It’s been clear for a while that decoding and correcting the errors quickly enough, in a fault-tolerant quantum computation, was going to push classical computing to the limit also. It’s also become clear that for just about anything classical computers do involving optimisation or uncertainty, you can now throw machine learning at it and they might do it better.”
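
To make the "throw machine learning at it" point concrete, here is a deliberately tiny sketch of my own (nothing like AlphaQubit's actual architecture): learn the syndrome-to-correction map of a 3-qubit repetition code purely from sampled noise. The eps value and the code choice are illustrative assumptions.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    eps, n = 0.05, 50_000

    # Sample i.i.d. bit-flip errors on a 3-qubit repetition code.
    e = (rng.random((n, 3)) < eps).astype(int)
    # Syndrome: parities of neighbouring pairs, i.e. what stabiliser
    # measurements reveal without touching the encoded value.
    s = np.stack([e[:, 0] ^ e[:, 1], e[:, 1] ^ e[:, 2]], axis=1)

    # Learn syndrome -> per-qubit correction directly from samples.
    dec = DecisionTreeClassifier().fit(s, e)
    corrected = e ^ dec.predict(s).astype(int)

    # Logical failure: the majority of the corrected block is flipped.
    p_logical = (corrected.sum(axis=1) >= 2).mean()
    print(f"physical eps={eps}, logical after decoding={p_logical:.4f}")  # ~3*eps^2

For this toy code the learned decoder just rediscovers the minimum-weight lookup table; the point of learned decoders like AlphaQubit is that for the surface code, with noisy and repeated syndrome measurements, there is no such clean table, and decoding speed matters.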

https://www.newscientist.com/article/2457207-google-deepmind...