An actual "thinking machine" would be constantly running computations on its accumulated experience in order to improve its future output and/or further compress its sensory history.
An LLM is doing exactly nothing while waiting for the next prompt.
> An actual "thinking machine" would be constantly running computations on its accumulated experience in order to improve its future output and/or further compress its sensory history.
> An LLM is doing exactly nothing while waiting for the next prompt.
Frankly, this objection seems very weak.
The kind of continuous computation described above is currently done with multiple LLMs and repeated calls, not within the running of a single model's input/output pass.
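For illustration, here is a rough sketch of that pattern, assuming a hypothetical `call_llm` helper standing in for whatever completion API is used; the "ongoing thinking" is just an external loop rebuilding a prompt from accumulated history:

```python
# Rough sketch (an assumption for illustration, not from the thread):
# "always thinking" today means an outer loop of separate, stateless
# model calls. call_llm is a hypothetical stand-in for any completion API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # e.g. one HTTP call to a hosted model

def reflect_forever(seed: str) -> None:
    history: list[str] = []
    thought = seed
    while True:  # each pass is a brand-new invocation of the model
        history.append(thought)
        # The "memory" lives entirely in the prompt we rebuild each time;
        # between calls, the model itself computes nothing.
        thought = call_llm("Reflect on everything so far:\n" + "\n".join(history))
```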
Another example: feed in a single token, or gibberish, and the models we have today are more than happy to spit out fantastic numbers of tokens. They really only stop because we watch for stop tokens they are trained to generate, and we perform the actual stopping action.
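A minimal sketch of that stopping mechanism, assuming a hypothetical `sample_next_token` that performs one forward pass and returns the sampled token; note that the halt check lives in the serving loop, not in the model:

```python
# Minimal sketch under stated assumptions: the serving loop, not the
# model, performs the actual stopping action.

STOP_TOKENS = {"<|endoftext|>"}  # tokens the model is trained to emit

def sample_next_token(context: list[str]) -> str:
    raise NotImplementedError  # one forward pass + sampling step

def generate(prompt_tokens: list[str], max_tokens: int = 4096) -> list[str]:
    out = list(prompt_tokens)
    for _ in range(max_tokens):
        tok = sample_next_token(out)
        if tok in STOP_TOKENS:  # we check, we decide, we stop
            break
        out.append(tok)
    return out  # without this check, generation would run to max_tokens
```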
It's fine though, this was as productive as I expected.
I'm listing things that current LLMs cannot do (or things they do that thinking entities would not) to argue that they are so simple they are far from anything that resembles thinking.
> It's fine though, this was as productive as I expected.
A product of your replies declining in quality and becoming more argumentative, so I will discontinue now.