
277 points simianwords | 3 comments
rhubarbtree No.45152883
I find this rather oddly phrased.

LLMs hallucinate because they are language models. They are stochastic models of language. They model language, not truth.

If the “truthy” responses are common in their training set for a given prompt, you're more likely to get something useful as output. It feels like we stumbled into that and said: OK, this is useful as an information-retrieval tool. And now we use RL to reinforce that useful behaviour. But it's still a (biased) language model.
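
To make “stochastic model of language” concrete, here's a toy bigram sampler (my own illustration, not how any real LLM is trained or sampled): it picks continuations purely by how often they appeared in the corpus, so whether “the sky is green” comes out depends only on the training text, not on the world.

    import random
    from collections import defaultdict

    # Toy bigram "language model": count which word follows which in a corpus,
    # then sample continuations from that distribution. Nothing in the model
    # represents whether a continuation is true, only how often it was seen.
    corpus = "the sky is blue . the sky is green . the grass is green .".split()

    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def sample_next(word):
        nexts = counts[word]
        words, weights = zip(*nexts.items())
        return random.choices(words, weights=weights)[0]

    # Prompt the model: it may well "hallucinate" that the sky is green,
    # because that sequence appears in its training data too.
    word = "sky"
    out = ["the", "sky"]
    for _ in range(2):
        word = sample_next(word)
        out.append(word)
    print(" ".join(out))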

I don’t think that’s how humans work. There’s more to it. We need a model of language, but that isn’t sufficient to explain our mental mechanisms. We have other ways of thinking besides generating language fragments.

Trying to eliminate cases where a stochastic model the size of an LLM gives “undesirable” or “untrue” responses seems rather odd.

replies(9): >>45152948 #>>45153052 #>>45153156 #>>45153672 #>>45153695 #>>45153785 #>>45154058 #>>45154227 #>>45156698 #
crabmusket No.45153695
> I don’t think that’s how humans work.

Every time this comes up I have to bring up Deutsch. He has the best description of intelligent cognition that I've come across. He takes Popper's "conjecture and criticism" approach to science and argues that this guess-and-check loop applies to all our thinking.

E.g. understanding spoken language involves guessing what might have been said and checking that guess against the sounds we actually heard. Visual processing works analogously.

LLMs seem to be great at conjecturing stuff, but incapable of checking, or even of knowing they need to check.
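
Roughly, the loop I mean looks like this (a toy sketch of conjecture-and-criticism, not Deutsch's own formulation; finding a factor of n is just a stand-in problem): proposals are cheap and unconstrained, and a separate criticism step gets to refute them.

    import random

    # Minimal guess-and-check loop in the Popper/Deutsch spirit: a "conjecture"
    # step proposes candidates freely, and a "criticism" step tries to refute
    # them. Only candidates that survive criticism are kept.

    def conjecture(n):
        return random.randint(2, n - 1)    # propose freely, no guarantee of correctness

    def criticize(n, guess):
        return n % guess == 0              # an independent check that can reject the guess

    def solve(n, attempts=10_000):
        for _ in range(attempts):
            guess = conjecture(n)
            if criticize(n, guess):        # survives criticism: accept it (for now)
                return guess
        return None                        # every conjecture was refuted

    print(solve(91))  # 7 or 13

The point is the two distinct roles: generation alone, however fluent, never gets you the second step.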

replies(1): >>45156238 #
1. codethief No.45156238
> Every time this comes up I have to bring up Deutsch. He has the best description of intelligent cognition that I've come across.

Would you have a reference?

replies(1): >>45158924 #
2. crabmusket No.45158924
If you like books, read The Beginning of Infinity. If you don't, I can't help! I wish there were something I could point to online, but nothing really encapsulates the lessons I took from that book. Yes, I'll have to write that thing one day.
replies(1): >>45160097 #
3. codethief No.45160097
Thanks so much!