
323 points steerlabs | 3 comments
jqpabc123 No.46153440
We are trying to fix probability with more probability. That is a losing game.

Thanks for pointing out the elephant in the room with LLMs.

The basic design is non-deterministic. Trying to extract "facts" or "truth" or "accuracy" is an exercise in futility.
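A minimal sketch of the point, using an invented toy distribution rather than a real model: an LLM's output layer is a probability distribution over next tokens, and generation samples from it, so even a confidently "known" fact is only emitted most of the time.

```python
import random

# Toy next-token distribution for "The capital of France is ___".
# The vocabulary and probabilities here are hypothetical, for illustration only.
next_token_probs = {"Paris": 0.90, "Lyon": 0.06, "Berlin": 0.04}

def sample_token(probs, rng):
    """Sample one token in proportion to its probability, as a sampler would."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random()  # unseeded: runs can differ, which is the point
samples = [sample_token(next_token_probs, rng) for _ in range(1000)]

# Even a model that is 90% "sure" emits a wrong answer roughly 10% of the time.
print(samples.count("Paris") / len(samples))
```

Greedy decoding (always taking the argmax) is deterministic, but production systems typically sample, so repeated queries can disagree.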

1. UniverseHacker No.46192471
Specifically, they are capable of inductive logic but not deductive logic. In practice, this may not be a serious limitation, if they get good enough at induction to still almost always get the right answer.
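The distinction can be sketched with a toy example (hypothetical code, not from the thread): a deductive rule is guaranteed correct whenever its premises hold, while an inductive guess generalizes from observed examples and can fail outside them.

```python
def deduce_even(n: int) -> bool:
    """Deduction: apply the definition of evenness directly; cannot be wrong."""
    return n % 2 == 0

def induce_even(examples: list[int], n: int) -> bool:
    """Induction (toy): guess evenness from the last digits seen in examples."""
    even_last_digits = {abs(x) % 10 for x in examples if x % 2 == 0}
    return abs(n) % 10 in even_last_digits

seen = [2, 4, 16, 28]             # observed even numbers end in 2, 4, 6, 8

print(deduce_even(30))             # True: guaranteed by the rule
print(induce_even(seen, 30))       # False: 30 is even, but no example ended in 0
print(induce_even(seen, 14))       # True: induction usually gets it right
```

The inductive guesser is right almost always on inputs resembling its examples, which is the "may not be a serious limitation" case, but it offers no guarantee.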
2. psychoslave No.46192556
What about abduction though?
3. UniverseHacker No.46195484
You’ll have to wait for the FOOM “Fast Onset of Overwhelming Mastery” for that I’m afraid.