168 points by 1wheel | 1 comment
optimalsolver No.40429473
>what the model is "thinking" before writing its response

An actual "thinking machine" would be constantly running computations on its accumulated experience in order to improve its future output and/or further compress its sensory history.

An LLM is doing exactly nothing while waiting for the next prompt.
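
As a minimal sketch of that claim (hypothetical names, not from the thread): a typical LLM serving loop is request-driven, so no computation runs and no weights or memories change between prompts.

    # Minimal sketch, assuming a hypothetical model.generate API:
    # the loop blocks until a prompt arrives, computes a reply, and
    # otherwise does nothing -- no background learning, no state updates.
    def serve(model, get_next_prompt, send_reply):
        while True:
            prompt = get_next_prompt()      # blocks; zero compute while waiting
            reply = model.generate(prompt)  # all computation happens here
            send_reply(reply)               # weights and memory are unchanged afterwards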

replies(5): >>40429486 #>>40429493 #>>40429606 #>>40429761 #>>40429847 #
whimsicalism No.40429761
If we figured out how to freeze and then revive brains, would that mean that all of the revived brains were no longer thinking because they had previously been paused at some point?

Frankly, this objection seems very weak.

replies(2): >>40429955 #>>40430180 #
1. abecedarius No.40430180
Yeah. I've had like three conversations with people who said LLMs don't "think", implied this was too obvious to need explaining, and, when pressed on it, brought up the pausing as their first justification.

It's an interesting window on people's intuitions -- this pattern feels surprising and alien to someone who imbibed Hofstadter and Dennett, etc., as a teen in the '80s.

(TBC, the surprise was not that people weren't sure LLMs "think" or are "conscious"; it's that they were sure they don't, on the grounds that the program is not running continually.)