
237 points JnBrymn | 1 comments
sabareesh ◴[] No.45675879[source]
It might be that our current tokenization is inefficient compared to how well the image pipeline does. Language already does a lot of compression, but there might be an even better way to represent it in latent space.
replies(3): >>45675953 #>>45676049 #>>45677115 #
1. ◴[] No.45677115[source]
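The tokenization-efficiency point above can be made concrete by measuring bytes per token: a tokenizer that packs more bytes into each token "compresses" text more aggressively. This is a minimal sketch using a toy whitespace tokenizer as a stand-in (an assumption, not any model's actual tokenizer) compared against character-level tokenization.

```python
# Sketch: compare "compression" of two tokenizers via bytes per token.
# The whitespace tokenizer is a hypothetical stand-in for a real subword
# tokenizer (e.g. BPE); character-level is the uncompressed baseline.

def bytes_per_token(text, tokenize):
    """Average UTF-8 bytes carried by each token under `tokenize`."""
    tokens = tokenize(text)
    return len(text.encode("utf-8")) / max(len(tokens), 1)

whitespace = lambda s: s.split()   # coarse tokens: whole words
chars = lambda s: list(s)          # finest tokens: single characters

sample = "Language already does a lot of compression in its latent structure."

# Fewer, coarser tokens mean each token carries more information (bytes).
assert bytes_per_token(sample, whitespace) > bytes_per_token(sample, chars)
print(round(bytes_per_token(sample, whitespace), 2))
print(round(bytes_per_token(sample, chars), 2))
```

Real subword tokenizers land between these two extremes, and the comment's suggestion is that a learned latent representation could push well past word-level packing.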