A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points zdw | 4 comments
1. rf15 ◴[] No.44489874[source]
It still boggles my mind that an amazing text autocompletion system, trained on millions of books and other texts, gets squeezed through a prompt/chat interface, which is obviously not the shape of most of its training data. Using it as chat already reduces the quality of the output significantly.
replies(2): >>44489893 #>>44491528 #
2. semanticc ◴[] No.44489893[source]
What's your suggested alternative?
replies(1): >>44490248 #
3. rf15 ◴[] No.44490248[source]
In our internal system we use it "as-is" as an autocomplete system: you feed in a lead directly and see how the model continues it and what it associates with the lead you gave.

We also visualise the actual associative strength of each generated token, to convey how "sure" the model is.
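The per-token "associative strength" can be read straight off the model's output distribution: softmax the logits at each step and look at the probability mass placed on the token that was actually emitted. A minimal sketch with hypothetical toy logits (no real model; the numbers are made up for illustration):

```python
import math

def softmax(logits):
    # Numerically stable softmax over one step's logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def token_confidence(step_logits, chosen_ids):
    """For each generation step, return the probability the model
    assigned to the token it actually emitted -- a simple 'how sure' signal."""
    return [softmax(logits)[tok] for logits, tok in zip(step_logits, chosen_ids)]

# Toy example: 3 steps over a 4-token vocabulary (hypothetical logits).
steps = [
    [5.0, 1.0, 0.5, 0.2],   # peaked distribution: confident step
    [1.1, 1.0, 0.9, 0.8],   # flat distribution: uncertain step
    [0.1, 4.0, 0.2, 0.3],
]
chosen = [0, 0, 1]          # greedy picks at each step
for p in token_confidence(steps, chosen):
    print(f"{p:.2f}")
```

A UI can then colour or underline each token by this value, so uncertain stretches of a completion are visible at a glance.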

LLMs alone aren't the way to AGI or an individual you can talk to in natural language. They're a very good lossy compression over a dataset that you can query for associations.

4. ethan_smith ◴[] No.44491528[source]
The chat interface is a UX compromise that makes LLMs accessible but constrains their capabilities. Alternative interfaces like document completion, outline expansion, or iterative drafting would better leverage the full distribution of the training data while reducing anthropomorphization.
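The "document completion" framing can be as simple as phrasing the request as the opening of a document for the model to continue, rather than as a chat turn. A sketch with a hypothetical helper (the template is an assumption, not a fixed API):

```python
def as_document_lead(topic: str, n_points: int) -> str:
    """Frame a request as the start of a document the model can continue,
    closer to the shape of pretraining text than a User/Assistant exchange."""
    return f"{topic}\n\nThe {n_points} main points are:\n\n1."

# Chat framing vs. completion framing of the same request.
chat_form = "User: List three risks of anthropomorphizing LLMs.\nAssistant:"
doc_form = as_document_lead("Risks of anthropomorphizing LLMs", 3)
print(doc_form)
```

The completion-style lead steers the model into continuing a list it has "already started", drawing on the full distribution of list-shaped documents in the training data.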