
168 points 1wheel | 2 comments | source
optimalsolver ◴[] No.40429473[source]
>what the model is "thinking" before writing its response

An actual "thinking machine" would be constantly running computations on its accumulated experience in order to improve its future output and/or further compress its sensory history.

An LLM is doing exactly nothing while waiting for the next prompt.

replies(5): >>40429486 #>>40429493 #>>40429606 #>>40429761 #>>40429847 #
fassssst ◴[] No.40429486[source]
Why does the timing of the “thinking” matter?
replies(1): >>40429660 #
verdverm ◴[] No.40429660[source]
Thinking is generally considered an internal process, without input/output (of tokens), though some people choose to put some of that thinking into a more permanent form.

I see thinking as less about "timing" and more about a "process"

What this post seems to be describing is more about where attention is paid and which neurons fire for various stimuli

replies(1): >>40429695 #
sabrina_ramonov ◴[] No.40429695[source]
We know so little about thinking and consciousness that these claims seem premature
replies(1): >>40429816 #
verdverm ◴[] No.40429816[source]
That one can fix the RNG and get consistent output indicates a lack of dynamics

They certainly do not self-update their weights in an online process as new information is experienced
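To illustrate the determinism point: here is a minimal sketch using Python's `random` module as a stand-in for an LLM's sampling loop (the function and vocabulary are hypothetical, not any real model's API). With the seed fixed, every run produces the identical token sequence, and no state carries over between calls.

```python
import random

def sample_tokens(seed, vocab=("the", "cat", "sat", "on", "mat"), n=5):
    """Stand-in for an LLM's sampler: a fresh RNG seeded the same way
    emits the same token sequence on every call."""
    rng = random.Random(seed)
    return [rng.choice(vocab) for _ in range(n)]

# Fixing the seed removes all run-to-run variation:
assert sample_tokens(42) == sample_tokens(42)

# Nothing persists between calls -- no online weight updates,
# no "experience" carried over from the previous run.
```

The analogy is loose, of course: a real LLM's "RNG" is the sampling temperature/seed, and the deterministic part is the forward pass over frozen weights.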

replies(1): >>40429919 #
whimsicalism ◴[] No.40429919[source]
If we could perfectly simulate the brain and there were quantum hidden variables, we too could “fix RNG and get deterministic output”