169 points mattmarcus | 7 comments

EncomLab ◴[] No.43612568[source]
This is like claiming a photoresistor-controlled night light "understands when it is dark" or that a bimetallic-strip thermostat "understands temperature". You can say those words, and they're syntactically correct, but semantically they're entirely wrong.
replies(6): >>43612607 #>>43612629 #>>43612689 #>>43612691 #>>43612764 #>>43612767 #
robotresearcher ◴[] No.43612689[source]
You declare this very plainly without evidence or argument, but this is an age-old controversial issue. It’s not self-evident to everyone, including philosophers.
replies(2): >>43612771 #>>43613278 #
mubou ◴[] No.43612771[source]
It's not age-old nor is it controversial. LLMs aren't intelligent by any stretch of the imagination. Each word/token is chosen as that which is statistically most likely to follow the previous. There is no capability for understanding in the design of an LLM. It's not a matter of opinion; this just isn't how an LLM works.
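To make "statistically most likely" concrete, here is a rough sketch of greedy decoding. This is a minimal example that assumes the Hugging Face transformers library, with GPT-2 and the prompt chosen purely for illustration:

    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The night light turns on when it is", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                 # shape (1, seq_len, vocab_size)
    probs = torch.softmax(logits[0, -1], dim=-1)   # distribution over the next token
    next_id = torch.argmax(probs)                  # greedy: take the single most likely token
    print(tok.decode(next_id.item()))

Sampling strategies (temperature, top-p) vary, but all the model ever produces is this probability distribution over the next token.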

Any comparison to the human brain misses the point that an LLM simulates only one small part of it, and notably not the frontal lobe, which is required for intelligence, reasoning, self-awareness, etc.

So, no, it's not a question of philosophy. For an AI to enter that realm, it would need to be more than just an LLM with some bells and whistles; perhaps an LLM plus something else, something fundamentally different that does not yet exist.

replies(4): >>43612834 #>>43612933 #>>43613018 #>>43613698 #
gwd ◴[] No.43612933[source]
> Each word/token is chosen as that which is statistically most likely to follow the previous.

The best way to predict the weather is to have a model which approximates the weather. The best way to predict the results of a physics simulation is to have a model which approximates the physical bodies in question. The best way to predict what word a human is going to write next is to have a model that approximates human thought.

replies(1): >>43612992 #
1. mubou ◴[] No.43612992[source]
LLMs don't approximate human thought, though. They approximate language. That's it.

Please, I'm begging you, go read some papers and watch some videos about machine learning and how LLMs actually work. It is not "thinking."

I fully realize neural networks can approximate human thought -- but we are not there yet, and when we do get there, it will be something that is not an LLM, because an LLM is not capable of that -- it's not designed to be.

replies(3): >>43613203 #>>43613420 #>>43615130 #
2. handfuloflight ◴[] No.43613203[source]
Isn't language expressed thought?
replies(1): >>43613306 #
3. fwip ◴[] No.43613306[source]
Language can be a (lossy) serialization of thought, yes. But language is not thought, nor inherently produced by thought. Most people agree that a process randomly producing grammatically correct sentences is not thinking.
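A toy illustration of that last point: a tiny context-free grammar, sketched in Python with made-up productions, will happily emit grammatical sentences with no model of meaning behind them at all.

    import random

    # Each nonterminal maps to a list of possible productions.
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["the cat"], ["a thermostat"], ["the night light"]],
        "VP": [["sleeps"], ["understands", "NP"], ["turns on"]],
    }

    def expand(symbol):
        # Terminals (anything not in GRAMMAR) are returned as-is.
        if symbol not in GRAMMAR:
            return symbol
        return " ".join(expand(s) for s in random.choice(GRAMMAR[symbol]))

    print(expand("S"))   # e.g. "a thermostat understands the night light"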
4. Sohcahtoa82 ◴[] No.43613420[source]
> it will be something that is not an LLM

I think it will be very similar in architecture.

Artificial neural networks already approximate how neurons in a brain work; it's just at a scale several orders of magnitude smaller.
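In the loose sense, anyway: an artificial "neuron" is just a weighted sum pushed through a nonlinearity. A crude NumPy sketch (the numbers are made up):

    import numpy as np

    def artificial_neuron(inputs, weights, bias):
        # Weighted sum plus bias, squashed by a nonlinearity (ReLU here).
        # That's the entire abstraction; biological neurons also involve spike
        # timing, neurotransmitters, dendritic computation, and much more.
        return np.maximum(0.0, np.dot(weights, inputs) + bias)

    print(artificial_neuron(np.array([0.2, -1.3, 0.7]),
                            np.array([0.5, 0.1, -0.4]),
                            bias=0.05))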

Our limiting factor for reaching brain-like intelligence via ANNs is probably hardware. We would need over 100 TB to store the weights alone, not to mention the ridiculous amount of compute to run it.
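For a rough sense of scale (every figure here is a ballpark assumption, not a measurement):

    # ~10^14 synapses in a human brain is a commonly cited order of magnitude.
    synapses = 100e12
    bytes_per_weight = 2                              # e.g. one fp16 parameter per synapse
    print(synapses * bytes_per_weight / 1e12, "TB")   # ~200 TB just for the weights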

replies(2): >>43616357 #>>43617044 #
5. gwd ◴[] No.43615130[source]
> LLMs don't approximate human thought, though. ...Please, I'm begging you, go read some papers and watch some videos about machine learning and how LLMs actually work.

I know how LLMs work; so let me beg you in return, listen to me for a second.

You have a purely theoretical argument: LLMs do text prediction, therefore it is not possible for them to actually think; and since it's not possible for them to actually think, you don't need to consider any other evidence.

I'm telling you, there's a flaw in your argument: In actuality, the best way to do text prediction is to think. An LLM that could actually think would be able to do text prediction better than an LLM that can't actually think; and the better an LLM is able to approximate human thought, the better its predictions will be. The fact that they're predicting text in no way proves that there's no thinking going on.

Now, that doesn't prove that LLMs actually are thinking; but it does mean that they might be thinking. And so you should think about how you would know if they're actually thinking or not.

6. codedokode ◴[] No.43616357[source]
> not to mention the ridiculous amount of compute to run it.

How does the brain compute the weights, then? Or maybe your assumption that the brain is equivalent to a mathematical NN is wrong?

7. yahoozoo ◴[] No.43617044[source]
How much compute do you think the human brain uses? They're training these LLMs with (hundreds of) thousands of GPUs.
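Nobody knows precisely, but the power budgets alone are telling. A back-of-envelope comparison, where every number is a rough assumption:

    brain_watts = 20        # commonly cited estimate for the human brain's power draw
    gpus = 10_000           # low end of "(hundreds of) thousands"
    watts_per_gpu = 700     # roughly a modern datacenter GPU's TDP, ignoring cooling
    print((gpus * watts_per_gpu) / brain_watts)   # ~350,000x the brain's power draw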