
335 points ingve | 5 comments
1. NooneAtAll3 No.45084643
> I think a more plausible amount of optimization would produce a circuit with 500x the cost of the factoring-15 circuit

I don't get this part

If author already produced "115x", how can optimizations make it worse?

2. tromp No.45084837
The 115x was obtained by doing a (less plausible) large amount of optimization...
3. turtletontine No.45084872
I think the idea is that the minimal implementation will be unstable and unreliable. I don’t know the details, but there’s much work and thought on quantum error correction - where you hook up N physical qubits in a clever way to function as one very stable logical qubit. Terms such as “decoherence time” make appearances. You can imagine this quickly expands into an awful lot of qubits.
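To make the overhead concrete: a common back-of-the-envelope estimate (not from this thread) is that a surface-code logical qubit at code distance d costs roughly 2d² physical qubits. A hedged sketch of how fast that compounds:

```python
# Rough surface-code overhead sketch. The 2*d**2 figure is a standard
# textbook-style estimate, not a claim from the thread; real overheads
# depend on the code, the hardware, and the target error rate.
def physical_qubits(logical_qubits: int, distance: int) -> int:
    """Approximate physical-qubit count for a surface code:
    about 2*d^2 physical qubits per logical qubit at distance d."""
    return logical_qubits * 2 * distance ** 2

# Even a tiny algorithm balloons quickly as the code distance grows.
for d in (9, 17, 25):
    print(d, physical_qubits(10, d))
```

At distance 25, ten logical qubits already need on the order of 12,500 physical qubits, which is the "awful lot of qubits" the comment gestures at.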
4. tsimionescu No.45085102
The point was that the amount of optimization work that went into making the circuit for factoring 21 only ~100x larger than the one for factoring 15 is not feasible for larger numbers. So, the general rule should be that you need ~500x more gates for a similar increase in the number to be factored, not just ~100x.
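A quick illustration of why the distinction matters when extrapolating: the ~115x and ~500x per-step ratios are from the thread, but the base gate count below is made up, and treating the growth as a fixed per-step multiplier is a simplifying assumption.

```python
# Illustrative only: compare compounding gate counts if each step up in
# problem size costs ~115x (the hand-optimized N=21 result) versus ~500x
# (the growth considered plausible at larger sizes).
base_gates = 1_000  # hypothetical gate count for the factoring-15 circuit

for steps in range(1, 4):
    optimized = base_gates * 115 ** steps   # optimistic extrapolation
    plausible = base_gates * 500 ** steps   # "plausible" extrapolation
    print(steps, optimized, plausible, round(plausible / optimized, 1))
```

After only three such steps the two estimates differ by roughly a factor of 80, so which ratio you assume dominates any forecast.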
5. Strilanc No.45086295
The more plausible amount of optimization is less optimization. Or, more accurately, the benefit of optimization at large sizes is expected to be smaller than it was for the N=21 circuit.