
323 points | steerlabs | 1 comment
jqpabc123 No.46153440:
We are trying to fix probability with more probability. That is a losing game.

Thanks for pointing out the elephant in the room with LLMs.

The basic design is non-deterministic. Trying to extract "facts" or "truth" or "accuracy" is an exercise in futility.

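The non-determinism claim above can be made concrete. In standard LLM decoding, the model produces logits over the vocabulary; sampling from the softmax distribution (with temperature) is stochastic, while greedy argmax decoding of the same logits is deterministic. The sketch below uses made-up toy logits and is not any particular model's implementation:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize; lower temperature
    # sharpens the distribution toward the argmax token.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0, rng=random):
    # Stochastic decoding: draw a token id from the softmax distribution.
    probs = softmax(logits, temperature)
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

def greedy_token(logits):
    # Deterministic decoding: always pick the highest-logit token.
    return max(range(len(logits)), key=lambda i: logits[i])

# Toy logits for a 4-token vocabulary (illustrative numbers only).
logits = [2.0, 1.5, 0.5, -1.0]
print(greedy_token(logits))                          # always 0
print([sample_token(logits) for _ in range(10)])     # token ids vary run to run
```

Greedy decoding of fixed logits is repeatable; sampled decoding is not, which is the "probability" the parent comment is pointing at. (In practice even "deterministic" settings can drift across hardware due to floating-point non-associativity, but that is a separate effect.)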
Davidzheng No.46191721:
lol humans are non-deterministic too
rthrfrd No.46191926:
But we also have a stake in our society, in the form of reputation and accountability, which greatly influences our behaviour. So comparing us to an LLM has always been meaningless anyway.
jennyholzer No.46191956:
[flagged]