
302 points by JnBrymn | 1 comment
tcdent No.45677440
"Kill the tokenizer" is such a wild proposition but is also founded in fundamentals.

Tokenizing text is such a hack, even though it works pretty well. The state of the art comes out of the gate with an approximation for quantifying language that's wrong on so many levels.
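
To make that concrete, here's a toy byte-pair-encoding (BPE) sketch in Python. The merge table is invented for illustration, but the greedy-merge mechanism mirrors what real BPE tokenizers do:

    # Toy BPE sketch. The merge rules below are made up; real tokenizers
    # learn tens of thousands of them from corpus statistics.
    def bpe_tokenize(word, merges):
        tokens = list(word)
        for pair in merges:  # apply merges in learned priority order
            out, i = [], 0
            while i < len(tokens):
                if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
                    out.append(tokens[i] + tokens[i + 1])  # merge the pair
                    i += 2
                else:
                    out.append(tokens[i])
                    i += 1
            tokens = out
        return tokens

    merges = [("t", "h"), ("th", "e"), ("i", "n"), ("in", "g")]
    print(bpe_tokenize("the", merges))      # ['the']
    print(bpe_tokenize("theme", merges))    # ['the', 'm', 'e']
    print(bpe_tokenize("nothing", merges))  # ['n', 'o', 'th', 'ing']

Note how "theme" ends up containing the token "the" purely by accident of spelling: the splits are frequency artifacts of the training corpus, not units of meaning.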

It's difficult to wrap my head around pixels being a more powerful representation of information, but someone's gotta come up with something other than the tokenizer.

dgently7 No.45677780
As a vision-capable person, I consume all text as images when I read, so it kinda passes the "evolution does it that way" test. Maybe we shouldn't be that surprised that vision is a great input method?

Actually, thinking more about that: I consume "text" as images and also as sounds… I kinda wonder, if instead of render-and-OCR like this suggests we did TTS and just encoded, say, the MP3 sample of the vocalization of the word, whether that would be fewer bytes than the rendered-pixels version… probably depends on the resolution / sample rate.
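
Back-of-envelope math in Python, where every constant (glyph size, word duration, sample rate, bitrate) is an assumption I'm making up for illustration:

    # Rough byte counts for one word; all constants are guesses.

    # Rendered pixels: a 5-character word at 10x16 px per glyph, 8-bit grayscale.
    chars = 5
    pixel_bytes = chars * 10 * 16  # 1 byte per pixel
    print(f"rendered pixels: {pixel_bytes} bytes")       # 800

    # Raw PCM speech: ~0.4 s at 16 kHz, 16-bit mono.
    seconds, sample_rate = 0.4, 16_000
    pcm_bytes = int(seconds * sample_rate * 2)           # 2 bytes per sample
    print(f"raw PCM audio:   {pcm_bytes} bytes")         # 12800

    # Compressed speech at ~32 kbit/s (roughly MP3/Opus territory).
    compressed_bytes = int(seconds * 32_000 / 8)
    print(f"compressed:      {compressed_bytes} bytes")  # 1600

Under these made-up numbers the rendered pixels win, but nudge the resolution or the bitrate and the comparison flips either way.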

psadri No.45678258
The pixels-to-sound path would pass through "reading," so there might be information loss. It's no longer just pixels.