
152 points by fzliu | 1 comment
bionhoward ◴[] No.43563017[source]
How does this compare with the Byte Latent Transformer [1]? This applies convolution after embedding, while BLT applies attention at embedding time?

1. https://ai.meta.com/research/publications/byte-latent-transf...

replies(1): >>43563056 #
1. janalsncm ◴[] No.43563056[source]
As I understand it, BLT uses a small neural network to tokenize bytes into patches but doesn't change the attention mechanism. MTA keeps traditional BPE tokenization but changes the attention mechanism itself. You could use both (latency be damned!).
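
For concreteness, here is a minimal PyTorch sketch of the idea described above: tokenization is left alone, and nearby pre-softmax attention scores are mixed by a small depthwise convolution over the (query, key) grid. The class name, kernel size, and masking scheme are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvScoreAttention(nn.Module):
    """Attention whose pre-softmax scores are mixed by a small depthwise
    convolution over the (query, key) score grid (a sketch, not the paper)."""

    def __init__(self, dim: int, n_heads: int, kernel: int = 3):
        super().__init__()
        assert dim % n_heads == 0 and kernel % 2 == 1
        self.n_heads, self.head_dim, self.kernel = n_heads, dim // n_heads, kernel
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        # one kernel per head (depthwise); no built-in padding because we pad
        # manually to keep the convolution causal along the query axis
        self.score_conv = nn.Conv2d(n_heads, n_heads, kernel, groups=n_heads)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
        b, s, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        shape = (b, s, self.n_heads, self.head_dim)
        q, k, v = (t.view(shape).transpose(1, 2) for t in (q, k, v))
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5  # (b, h, s, s)

        causal = torch.triu(
            torch.ones(s, s, dtype=torch.bool, device=x.device), diagonal=1)
        # zero masked entries so the conv cannot pull in future keys, then pad
        # only above in the query dimension so each row sees past rows only
        # (a conservative simplification of the masking a real layer needs)
        scores = scores.masked_fill(causal, 0.0)
        half = self.kernel // 2
        scores = F.pad(scores, (half, half, self.kernel - 1, 0))
        scores = self.score_conv(scores)                    # back to (b, h, s, s)

        scores = scores.masked_fill(causal, float("-inf"))  # re-mask pre-softmax
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, s, -1)
        return self.proj(out)

# usage: drop-in where a standard self-attention layer would sit
layer = ConvScoreAttention(dim=64, n_heads=4)
y = layer(torch.randn(2, 16, 64))   # -> (2, 16, 64)
```

Note the contrast with BLT: the module above still consumes ordinary token embeddings (e.g. from BPE IDs); BLT instead changes what gets embedded in the first place by grouping raw bytes into patches before any of this runs.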