
168 points by 1wheel | 4 comments
optimalsolver ◴[] No.40429473[source]
>what the model is "thinking" before writing its response

An actual "thinking machine" would be constantly running computations on its accumulated experience in order to improve its future output and/or further compress its sensory history.

An LLM is doing exactly nothing while waiting for the next prompt.

replies(5): >>40429486 #>>40429493 #>>40429606 #>>40429761 #>>40429847 #
byteknight ◴[] No.40429493[source]
I disagree with this. It suggests that thinking requires persistent, malleable, non-static memory, which is not the case. You can reason perfectly well without adding to your knowledge if you have a base set of logic to work from.

I think the thing you were looking for was more along the lines of a persistent autonomous agent.

replies(2): >>40429744 #>>40430109 #
HarHarVeryFunny ◴[] No.40430109[source]
Sure, you can reason over a fixed "base set of logic", although there's another name for that - an expert system with a fixed set of rules, which IMO is really the right way to view an LLM.

Still, what current LLMs are doing with their fixed rules is only a very limited form of reasoning, since they apply a fixed N steps of rule application to generate each word. People are looking to techniques such as "group of experts" prompting to improve reasoning: step-wise generate multiple candidate responses, evaluate them, then proceed to the next step - roughly the loop sketched below.
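
A minimal sketch of that loop in Python, assuming hypothetical generate_step and score functions as stand-ins for real model calls (a real implementation would swap in an actual LLM API):

    import random

    def generate_step(context: str) -> str:
        # Hypothetical stand-in: ask the model to extend the reasoning
        # by one step; here we just append a random token.
        return context + f" step({random.randint(0, 9)})"

    def score(candidate: str) -> float:
        # Hypothetical stand-in: rate a candidate continuation
        # (a real system might ask a verifier model instead).
        return random.random()

    def solve(prompt: str, n_candidates: int = 5, n_steps: int = 3) -> str:
        context = prompt
        for _ in range(n_steps):
            # Sample several continuations, keep the best-scoring one,
            # then continue from it: generate, evaluate, proceed.
            candidates = [generate_step(context) for _ in range(n_candidates)]
            context = max(candidates, key=score)
        return context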

replies(1): >>40430171 #
whimsicalism ◴[] No.40430171[source]
If you zoom in enough, all thinking is an expert system with a fixed set of rules.
replies(2): >>40430181 #>>40431190 #
1. byteknight ◴[] No.40430181[source]
Exactly. You can't reason with what you do not currently possess.
replies(2): >>40430491 #>>40431301 #
2. verdverm ◴[] No.40430491[source]
How does scientific progress happen without reasoning about that which we do not know or understand?
replies(1): >>40432026 #
3. HarHarVeryFunny ◴[] No.40431301[source]
Sure, but you (a person, not an LLM) can also reason about what you don't possess, which is one of our primary learning mechanisms: curiosity driven by a lack of knowledge, prompting us to acquire new knowledge through physical and/or mental exploration.

An LLM has no innate traits such as curiosity or boredom to trigger that exploration, and in any case no online/incremental learning mechanism to benefit from it even if it did.

4. byteknight ◴[] No.40432026[source]
That's building upon current knowledge, which is a different application of reasoning.