
168 points | 1wheel | 5 comments
optimalsolver ◴[] No.40429473[source]
>what the model is "thinking" before writing its response

An actual "thinking machine" would be constantly running computations on its accumulated experience in order to improve its future output and/or further compress its sensory history.

An LLM is doing exactly nothing while waiting for the next prompt.

replies(5): >>40429486 #>>40429493 #>>40429606 #>>40429761 #>>40429847 #
whimsicalism ◴[] No.40429761[source]
If we figured out how to freeze and then revive brains, would that mean that all of the revived brains were no longer thinking because they had previously been paused at some point?

Frankly, this objection seems very weak.

replies(2): >>40429955 #>>40430180 #
1. verdverm ◴[] No.40429955[source]
There are many more features that would be needed. As a peer comment pointed out, one is being able to recognize that you are saying something incorrect, pausing, and then starting a new stream of output.

Today this is done with multiple LLMs and multiple calls orchestrated around the model, not within a single model's input/output pass.
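
As a rough illustration, that kind of multi-call pipeline looks something like the sketch below (`llm` is a hypothetical stand-in for any prompt-in, text-out completion API, not a real client; the prompts and the "no errors" check are placeholder heuristics):

    # Minimal sketch of self-correction via multiple LLM calls.
    # `llm` is a hypothetical stand-in for any text-completion API.
    def llm(prompt: str) -> str:
        return "stubbed completion"  # replace with a real API call

    def answer_with_self_check(question: str) -> str:
        draft = llm(question)
        critique = llm(f"List any factual errors in this answer:\n{draft}")
        if "no errors" in critique.lower():  # simplistic placeholder check
            return draft
        # A second, separate call does the "pausing and restarting";
        # the model itself never interrupts its own output stream.
        return llm(f"Rewrite the answer, fixing these issues:\n"
                   f"{critique}\n\nOriginal answer:\n{draft}")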

Another example: feed in a single token or gibberish, and the models we have today are more than happy to spit out enormous numbers of tokens. They really only stop because we watch for stop tokens they are trained to emit, and we perform the actual stopping.
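
To make that concrete, here is a toy decoding loop (illustrative only; `next_token` is a fake model that just samples from a tiny vocabulary). The model only ever proposes the next token; it is the surrounding harness that notices the stop token and halts:

    import random

    STOP_TOKEN = "<|endoftext|>"
    VOCAB = ["the", "cat", "sat", STOP_TOKEN]

    def next_token(context: str) -> str:
        # Toy stand-in for a forward pass: a model only ever proposes
        # the next token; nothing in here halts generation.
        return random.choice(VOCAB)

    def generate(prompt: str, max_tokens: int = 50) -> str:
        out = []
        for _ in range(max_tokens):
            tok = next_token(prompt + " " + " ".join(out))
            if tok == STOP_TOKEN:  # the harness detects the stop token...
                break              # ...and performs the actual stopping
            out.append(tok)
        return " ".join(out)

    print(generate("gibberish:"))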

replies(1): >>40430015 #
2. whimsicalism ◴[] No.40430015[source]
I don't see why any of the things you're describing are criteria for thinking; it seems like you're arbitrarily picking things humans do and declaring them constitutive of thought.
replies(1): >>40430054 #
3. verdverm ◴[] No.40430054[source]
It's more to point out how far the LLMs we have today are from anything that ought to be considered thought. They are far more mechanical than anything else.
replies(1): >>40430104 #
4. whimsicalism ◴[] No.40430104{3}[source]
You're just retreating into tautologies; my question was why these are the criteria for thought.

It's fine though; this was as productive as I expected.

replies(1): >>40430217 #
5. verdverm ◴[] No.40430217{4}[source]
I'm not listing criteria for thought

I'm listing things that current LLMs cannot do (or things they do that thinking entities would not) to argue that they are so simple as to be far from anything resembling thinking.

> It's fine though; this was as productive as I expected.

A product of your replies declining in quality and becoming more argumentative, so I will discontinue now.