
114 points roboboffin | 2 comments
sigmar | No.42197504
>AlphaQubit, a recurrent-transformer-based neural-network architecture that learns to predict errors in the logical observable based on the syndrome inputs (Methods and Fig. 2a). This network, after two-stage training—pretraining with simulated samples and finetuning with a limited quantity of experimental samples (Fig. 2b)—decodes the Sycamore surface code experiments more accurately than any previous decoder (machine learning or otherwise)

>One error-correction round in the surface code. The X and Z stabilizer information updates the decoder’s internal state, encoded by a vector for each stabilizer. The internal state is then modified by multiple layers of a syndrome transformer neural network containing attention and convolutions.

I can't seem to find a detailed description of the architecture beyond this bit in the paper and the figure it references. Gone are the days when Google handed out ML methodologies like candy... (note: not criticizing them for being protective of their IP, just pointing out how much things have changed since 2017)
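For anyone trying to picture what that figure is describing, here is a rough sketch of what one such decoding round could look like: a state vector per stabilizer, updated recurrently by the new X/Z syndrome bits, then mixed by attention and convolution layers over the stabilizer lattice, with a readout for the logical observable. All names, shapes, and layer choices below are my guesses at the general idea, not the published AlphaQubit architecture.

    # Hypothetical sketch, not the published AlphaQubit architecture.
    # One error-correction round: each stabilizer keeps a state vector,
    # the new syndrome bit updates it recurrently, then attention and
    # convolution layers mix information across the stabilizer lattice.
    import torch
    import torch.nn as nn

    class SyndromeTransformerRound(nn.Module):
        def __init__(self, grid: int = 5, d_model: int = 128, n_heads: int = 4):
            super().__init__()
            self.grid = grid                              # assume a grid x grid stabilizer layout
            self.embed = nn.Linear(1, d_model)            # embed each syndrome bit
            self.update = nn.GRUCell(d_model, d_model)    # recurrent per-stabilizer state update
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.conv = nn.Conv2d(d_model, d_model, kernel_size=3, padding=1)
            self.norm1 = nn.LayerNorm(d_model)
            self.norm2 = nn.LayerNorm(d_model)

        def forward(self, state, syndrome):
            # state:    (batch, grid*grid, d_model) decoder internal state
            # syndrome: (batch, grid*grid) X/Z stabilizer measurements for this round
            b, n, d = state.shape
            s = self.embed(syndrome.unsqueeze(-1))                       # (b, n, d)
            state = self.update(s.reshape(-1, d), state.reshape(-1, d)).reshape(b, n, d)
            attn_out, _ = self.attn(state, state, state)                 # attention across stabilizers
            state = self.norm1(state + attn_out)
            img = state.transpose(1, 2).reshape(b, d, self.grid, self.grid)
            conv_out = self.conv(img).flatten(2).transpose(1, 2)         # convolution over the lattice
            return self.norm2(state + conv_out)

    # Unroll over rounds and read out the logical observable (toy data).
    grid, d_model = 5, 128
    round_block = SyndromeTransformerRound(grid, d_model)
    readout = nn.Linear(d_model, 1)                                      # logical-flip logit
    state = torch.zeros(8, grid * grid, d_model)                         # batch of 8 experiments
    for _ in range(25):                                                  # 25 error-correction rounds
        syndrome = torch.randint(0, 2, (8, grid * grid)).float()         # stand-in syndrome data
        state = round_block(state, syndrome)
    logit = readout(state.mean(dim=1))                                   # (8, 1)

The two-stage training the paper mentions would presumably just be two optimization passes over something like this: first on simulated syndrome data, then finetuned on the limited Sycamore experimental samples.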

replies(1): >>42198250 #
jncfhnb | No.42198250
Eh. It was always sort of muddy. We never actually had an implementation of doc2vec as described in the paper.
replies(2): >>42198387 #>>42199098 #
1. myownpetard | No.42198387
That's because attention is all we need.
replies(1): >>42199642 #
2. griomnib | No.42199642
…and a green line by the GOOG ticker.