
323 points steerlabs | 2 comments
jacquesm ◴[] No.46193288[source]
LLMs are text models, not world models, and that is the root cause of the problem. If you and I were discussing furniture and for some reason you had assumed the furniture was glued to the ceiling instead of standing on the floor (contrived example), it would most likely take only one correction, checked against your actual experience, for you to see that you were on the wrong track. An LLM will happily re-introduce that error a few ping-pongs later and re-establish the track it was on before, because that apparently acts as some kind of attractor.

Not having a world model is a massive disadvantage when dealing with facts. Facts are supposed to reinforce each other; if you admit even a single fact that is nonsense, you can very confidently deviate into what at best is misguided science fiction, and at worst ends up being used as the basis for an edifice that simply has no support.

Facts are contagious: they work just like foundation stones, and if you allow incorrect facts to become part of your foundation you will produce nonsense. This is my main gripe with AI, and it is - funnily enough - also my main gripe with some mass human activities.

replies(2): >>46193375 #>>46195374 #
coldtea ◴[] No.46193375[source]
>LLMs are text model, not world models and that is the root cause of the problem.

Is it, though? In the end, the information in the training texts is a distilled proxy for the world, and the trained model ends up being a world model, just a once-removed one.

Text is not that different to visual information in that regard (and humans base their world model on both).

>Not having a world model is a massive disadvantage when dealing with facts, the facts are supposed to re-inforce each other, if you allow even a single fact that is nonsense then you can very confidently deviate into what at best would be misguided science fiction, and at worst is going to end up being used as a basis to build an edifice on that simply has no support.

Regular humans believe all kinds of facts that are nonsense, many others that are wrong, and quite a few that even run counter to logic.

And short of omnipresence and omniscience (directly examining the whole world), any world model, human or AI, is built on sets of facts, many of which might not be true or valid to begin with.

replies(3): >>46193476 #>>46195701 #>>46198081 #
1. mrguyorama ◴[] No.46198081[source]
>In the end, the information in the training texts is a distilled proxy for the world

This is routinely asserted. How has it been proven?

Humans write all sorts of text that has zero connection to reality, even when they are ostensibly writing about reality.

Training on ancient Greek philosophy, which was expressly written to distill knowledge about the real world, would produce a stupid LLM that doesn't know about the real world, because the training text was itself wrong about the underlying world.

Also, if LLMs were able to extract underlying truth from training material, why can't they do math very well? It would be easy to train an LLM on only correct math; indeed, you could generate a corpus of provably correct math of any size you want. I assume someone somewhere has demonstrated success training a neural network on math and having it recover something like "addition", but how well would such a process survive if a large fraction of its training material were instead simply incorrect math?
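
To make that thought experiment concrete, here is a minimal sketch (assuming PyTorch, a toy MLP, and single-digit addition as a stand-in for "correct math"; all names and hyperparameters are illustrative, not taken from any real experiment): train the same tiny network twice, once on clean sums and once with a fraction of the answers corrupted, then compare accuracy on a clean held-out set.

    import torch
    import torch.nn as nn

    def make_dataset(n, corrupt_frac=0.0, seed=0):
        # Pairs of single digits and their sums; a fraction of the sums
        # is replaced with random wrong answers ("incorrect math").
        g = torch.Generator().manual_seed(seed)
        a = torch.randint(0, 10, (n,), generator=g)
        b = torch.randint(0, 10, (n,), generator=g)
        y = a + b  # correct answers, in 0..18
        n_bad = int(corrupt_frac * n)
        if n_bad:
            y[:n_bad] = torch.randint(0, 19, (n_bad,), generator=g)
        x = torch.stack([a, b], dim=1).float() / 9.0  # scale inputs to [0, 1]
        return x, y

    def train_and_eval(corrupt_frac):
        x_tr, y_tr = make_dataset(20_000, corrupt_frac, seed=0)
        x_te, y_te = make_dataset(2_000, 0.0, seed=1)  # test set is always clean
        model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 19))
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(500):  # full-batch training is fine at this size
            opt.zero_grad()
            loss_fn(model(x_tr), y_tr).backward()
            opt.step()
        with torch.no_grad():
            return (model(x_te).argmax(dim=1) == y_te).float().mean().item()

    for frac in (0.0, 0.3, 0.6):
        print(f"label corruption {frac:.0%}: clean test accuracy {train_and_eval(frac):.2f}")

This models the bad "facts" as random wrong answers; a variant closer to confidently-wrong text would corrupt the labels with a consistent wrong rule instead.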

The training text is nothing more than human-generated text, and asserting anything more concrete about it than "humans considered this text good enough to be worth writing" is fallacious.

This applies even if your training corpus is, for example, only physics papers that have been strongly replicated and are likely "true". Unless the LLM is also trained on the underlying data itself, the only information available is what the humans thought and wrote. There is no definite link between that and actual reality, which is why physics accepted an "Aether" for so long: the data we had up to that point aligned with our incorrect models. You could not disambiguate between the wrong Aetheric models and a better model with the data we had, and that would remain true of any text written about that data.

Humans suck at distilling fact out of reality despite our direct connection to it, for all sorts of fun reasons you can read about in psychology, and if you disconnect a human from reality it only gets worse.

Why would you believe LLMs could possibly be different? A model trained on bad data cannot magically figure out which data is bad.

replies(1): >>46205762 #
2. jacquesm ◴[] No.46205762[source]
I think a key insight from your comment is that in order to decide whether the stuff we allow into our brains gets permanent billing, we test it against our world model, and if it does not fit we reject it. LLMs accept anything in the training set, so curation of the training set is a big factor in the quality of an LLM's output. That's an incremental improvement, not a massive leap forward, but it will definitely help reduce the percentage of bullshit produced.