jqpabc123 ◴[] No.46153440[source]
We are trying to fix probability with more probability. That is a losing game.

Thanks for pointing out the elephant in the room with LLMs.

The basic design is non-deterministic. Trying to extract "facts" or "truth" or "accuracy" is an exercise in futility.
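
A toy illustration of the point (my own sketch with made-up numbers, not any real model's code): generation is a draw from a probability distribution over tokens, so the very same prompt can come back with different "facts".

    import random

    # Hypothetical next-token distribution a model might assign after the
    # prompt "The capital of Australia is" (numbers invented for illustration).
    next_token_probs = {"Canberra": 0.55, "Sydney": 0.30, "Melbourne": 0.15}

    def sample_next_token(probs):
        # Standard categorical sampling: pick a token in proportion to its probability.
        r = random.random()
        cumulative = 0.0
        for token, p in probs.items():
            cumulative += p
            if r < cumulative:
                return token
        return token  # fallback for floating-point edge cases

    # Identical input, run five times: the output can differ each run.
    print([sample_next_token(next_token_probs) for _ in range(5)])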

replies(17): >>46155764 #>>46191721 #>>46191867 #>>46191871 #>>46191893 #>>46191910 #>>46191973 #>>46191987 #>>46192152 #>>46192471 #>>46192526 #>>46192557 #>>46192939 #>>46193456 #>>46194206 #>>46194503 #>>46194518 #
encyclopedism ◴[] No.46194518[source]
I couldn't agree with you more.

I really do find it puzzling that so many on HN are convinced LLMs reason or think, and continue to entertain that line of argument, while at the same time somehow knowing precisely what the brain/mind does and constantly using CS language to draw correspondences where there are none. The simplest example is the claim that LLMs function in a fashion similar to human brains. They categorically do not. I do not have most of humanity's literary output in my head, and yet I can coherently write this sentence.

While I'm on the subject: LLMs don't hallucinate. They output text, and when that text is measured and judged by a human to be 'correct', then it is. LLMs 'hallucinate' because that is literally the ONLY thing they can do: produce some output given some input. They don't actually understand anything about what they output. It's just text.

My pen-and-paper version of the latest LLM (quite a large stack of paper, and certainly a lot of ink, I might add) will do the same thing as the latest SOTA LLM. It's just an algorithm.
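
To make the pen-and-paper point concrete, here is a deliberately tiny sketch (a hypothetical 3-token vocabulary and invented weights, nothing from a real model) of the only kind of operation being performed, over and over: multiply, add, softmax, pick a token. Nothing in it requires understanding to execute by hand.

    import math

    # Made-up numbers: a 2-dim "hidden state" and a 3-token vocabulary.
    hidden_state = [0.8, -0.3]
    unembedding = [[1.2, 0.4],   # token "cat"
                   [0.1, 0.9],   # token "sat"
                   [-0.5, 0.3]]  # token "mat"
    vocab = ["cat", "sat", "mat"]

    # Step 1: logits via a matrix-vector multiply (pen-and-paper arithmetic).
    logits = [sum(w * h for w, h in zip(row, hidden_state)) for row in unembedding]

    # Step 2: softmax turns the logits into a probability distribution.
    exps = [math.exp(l) for l in logits]
    probs = [e / sum(exps) for e in exps]

    # Step 3: emit the most probable token (greedy decoding).
    print(vocab[max(range(len(vocab)), key=lambda i: probs[i])])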

I am surprised that so many in the HN community have so quickly taken it as fact that LLMs think or reason, even anthropomorphising LLMs to that end.

replies(4): >>46195534 #>>46197321 #>>46197431 #>>46198125 #
zby ◴[] No.46198125[source]
Most things that used to be considered reasoning are now trivially implemented by computers: arithmetic, logical inference (surely that is reasoning, isn't it?), playing chess. Now LLMs go even further. So what is your definition of reasoning? What concrete action in that definition are you sure a computer will not be able to perform in, say, five years?
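The logical-inference case really is a few lines of mechanical rule application. A toy forward-chaining sketch (rule set invented for the example):

    # Toy forward chaining: apply "if X then Y" rules until nothing new follows.
    facts = {"socrates_is_a_man"}
    rules = [("socrates_is_a_man", "socrates_is_mortal"),
             ("socrates_is_mortal", "socrates_will_die")]

    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # conclusions "inferred" purely by rule-following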
replies(1): >>46198934 #
encyclopedism ◴[] No.46198934[source]
The definitions of things such as reasoning, understanding, and intellect are STILL open academic questions. Quite literally, humanity's greatest minds are currently attempting to tease out definitions, and whatever we currently have falls short. See, for example, the hard problem of consciousness.

However, I can attempt to provide some insight by taking the opposite approach: what is NOT reasoning? Getting a computer to follow a series of steps (an algorithm) is NOT reasoning. A chess computer is NOT reasoning; it is following a series of steps. Assuming that the chess computer IS reasoning would have profound effects on so much else. For example, it would imply that your digital thermostat also reasons!
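
To put the thermostat comparison in code (a hypothetical example of mine, not any actual product's firmware): the entire "decision" is a fixed comparison against a setpoint, yet by the loose standard being applied to chess engines it would count as reasoning.

    # A digital thermostat's whole control "policy": a fixed series of steps.
    def thermostat_step(current_temp_c, setpoint_c=21.0, hysteresis_c=0.5):
        if current_temp_c < setpoint_c - hysteresis_c:
            return "heater_on"
        if current_temp_c > setpoint_c + hysteresis_c:
            return "heater_off"
        return "hold"

    print(thermostat_step(19.0))  # "heater_on" -- rule-following, not reasoning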