323 points steerlabs | 3 comments
jacquesm ◴[] No.46193288[source]
LLMs are text models, not world models, and that is the root cause of the problem. If you and I were discussing furniture and for some reason you had assumed the furniture to be glued to the ceiling instead of standing on the floor (a contrived example), it would most likely take only one correction, grounded in your actual experience, for you to realize you were probably on the wrong track. An LLM will happily re-introduce that error a few ping-pongs later and re-establish the track it was on before, because that apparently is some kind of attractor.

Not having a world model is a massive disadvantage when dealing with facts. Facts are supposed to reinforce each other; if you allow even a single nonsense fact in, you can very confidently deviate into what at best would be misguided science fiction, and at worst will end up being used as the basis for an edifice that simply has no support.

Facts are contagious: they work just like foundation stones. If you allow incorrect facts to become part of your foundation, you will be producing nonsense. This is my main gripe with AI and it is, funnily enough, also my main gripe with some mass human activities.

replies(2): >>46193375 #>>46195374 #
coldtea ◴[] No.46193375[source]
>LLMs are text models, not world models, and that is the root cause of the problem.

Is it though? In the end, the information in the training texts is a distilled proxy for the world, and the weighted model ends up being a world model, just a once-removed one.

Text is not that different to visual information in that regard (and humans base their world model on both).

>Not having a world model is a massive disadvantage when dealing with facts. Facts are supposed to reinforce each other; if you allow even a single nonsense fact in, you can very confidently deviate into what at best would be misguided science fiction, and at worst will end up being used as the basis for an edifice that simply has no support.

Regular humans believe all kinds of facts that are nonsense, many others that are wrong, and quite a few that even run counter to logic.

And short of omnipresence and omniscience, directly examining the whole world, any world model (human or AI) is built on sets of facts, many of which might not be true or valid to begin with.

replies(3): >>46193476 #>>46195701 #>>46198081 #
1. pessimizer ◴[] No.46195701[source]
People have an actual world model, though, that they have to deal with in order to get the food into their mouths or to hit the toilet properly.

The "facts" that they believe that may be nonsense are part of an abstract world model that is far from their experience, for which they never get proper feedback (such as the political situation in Bhutan, or how their best friend is feeling.) In those, it isn't surprising that they perform like an LLM, because they're extracting all of the information from language that they've ingested.

Interestingly, the feedback people use to adjust the language-extracted portions of their world models is whether demonstrating their understanding of those models seems to please or displease the people around them, who in turn respond in physically confirmable ways. What irritates people about simpering LLMs is that they don't do this properly. They should be testing their knowledge against us (especially their knowledge of our intentions or goals), and have some fear of failure. They have no fear and take no risk; they're stateless and empty.

Human abstractions are based in the reality of the physical responses of the people around them. The facts of those responses are true and valid results of the articulation of these abstractions. The content is irrelevant; when there's no opportunity to act, we're just acting as carriers.

replies(1): >>46195879 #
2. jacquesm ◴[] No.46195879[source]
> Human abstractions are based in the reality of the physical responses of the people around them.

And in the physical responses of the world around them. That empiricism is the foundation of all of science, and if you throw it out, the end result is gibberish.

replies(1): >>46198216 #
3. mrguyorama ◴[] No.46198216[source]
The physical responses of the world around them, yes, but only after you have yanked the concept outside of the human brain.

We have to blind medical professionals during studies because even thoroughly trained and experienced professionals are still more likely to form conclusions and opinions based on well-understood human biases than on reality.

You can take a gambling addict and teach them as much statistics and probability as you want, and even if they demonstrably learned it, they will still go back to the slots and believe a hit is "due", because the link between reality and the brain's construction of its internal models is extremely limited, and those models only inform the brain's processes, not necessarily constrain them.
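
(A minimal sketch of that independence point, with made-up win probability and streak length rather than anything from the thread: with independent spins, the win rate right after a long losing streak is no better than the overall win rate.)

    import random

    P_WIN = 0.05        # assumed per-spin win probability (illustrative)
    STREAK = 10         # losing-streak length we condition on (illustrative)
    N = 1_000_000       # number of simulated spins

    random.seed(0)
    spins = [random.random() < P_WIN for _ in range(N)]

    overall = sum(spins) / N
    after_streak = [spins[i] for i in range(STREAK, N)
                    if not any(spins[i - STREAK:i])]   # previous STREAK spins were all losses
    conditional = sum(after_streak) / len(after_streak)

    print(f"overall win rate:               {overall:.4f}")
    print(f"win rate after {STREAK} straight losses: {conditional:.4f}")  # about the same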

I will never understand, however, how some people think that an LLM can pull a signal out of its training material that doesn't actually exist in that material.

It's like training an LLM on Monopoly games and expecting it to be good at chess. What?