
169 points | mattmarcus | 1 comment | source
EncomLab ◴[] No.43612568[source]
This is like claiming a photoresistor-controlled night light "understands when it is dark" or that a bimetallic strip thermostat "understands temperature". You can say those words, and they are syntactically correct, but semantically they are entirely wrong.
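
To make the analogy concrete: the night light's "understanding" of darkness is a single threshold comparison and nothing more. A minimal sketch (the threshold value and the idea of a normalized sensor reading are made up for illustration):

    DARK_THRESHOLD = 0.2  # arbitrary: normalized light level below which we call it "dark"

    def night_light_on(light_level: float) -> bool:
        # "Understands when it is dark" = one comparison against a fixed threshold.
        return light_level < DARK_THRESHOLD
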
replies(6): >>43612607 #>>43612629 #>>43612689 #>>43612691 #>>43612764 #>>43612767 #
robotresearcher ◴[] No.43612689[source]
You declare this very plainly without evidence or argument, but this is an age-old controversial issue. It’s not self-evident to everyone, including philosophers.
replies(2): >>43612771 #>>43613278 #
mubou ◴[] No.43612771[source]
It's not age-old nor is it controversial. LLMs aren't intelligent by any stretch of the imagination. Each word/token is chosen as that which is statistically most likely to follow the previous. There is no capability for understanding in the design of an LLM. It's not a matter of opinion; this just isn't how an LLM works.
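
For reference, that selection loop looks roughly like the following sketch, which uses the Hugging Face transformers library with gpt2 purely as an example model, and greedy argmax decoding for simplicity (real systems usually sample rather than always take the top token):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The night light turns on when it is", return_tensors="pt").input_ids
    for _ in range(5):
        with torch.no_grad():
            logits = model(ids).logits        # scores over the whole vocabulary
        next_id = logits[0, -1].argmax()      # greedy: pick the most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)
    print(tok.decode(ids[0]))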

Any comparison to the human brain misses the point that an LLM only simulates one small part of it, and notably not the frontal lobe, which is required for intelligence, reasoning, self-awareness, etc.

So, no, it's not a question of philosophy. For an AI to enter that realm, it would need to be more than just an LLM with some bells and whistles; an LLM plus something else, perhaps, something fundamentally different that does not yet exist.

replies(4): >>43612834 #>>43612933 #>>43613018 #>>43613698 #
gwd ◴[] No.43612933[source]
> Each word/token is chosen as that which is statistically most likely to follow the previous.

The best way to predict the weather is to have a model which approximates the weather. The best way to predict the results of a physics simulation is to have a model which approximates the physical bodies in question. The best way to predict what word a human is going to write next is to have a model that approximates human thought.

replies(1): >>43612992 #
mubou ◴[] No.43612992[source]
LLMs don't approximate human thought, though. They approximate language. That's it.

Please, I'm begging you, go read some papers and watch some videos about machine learning and how LLMs actually work. It is not "thinking."

I fully realize neural networks can approximate human thought -- but we are not there yet, and when we do get there, it will be something that is not an LLM, because an LLM is not capable of that -- it's not designed to be.

replies(3): >>43613203 #>>43613420 #>>43615130 #
Sohcahtoa82 ◴[] No.43613420{3}[source]
> it will be something that is not an LLM

I think it will be very similar in architecture.

Artificial neural networks already approximate how neurons in a brain work; it's just at a scale that's several orders of magnitude smaller.
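
Loosely speaking, that approximation is just a weighted sum pushed through a nonlinearity. A minimal single-neuron sketch (NumPy, for illustration only):

    import numpy as np

    def artificial_neuron(x: np.ndarray, w: np.ndarray, b: float) -> float:
        # Weighted sum of inputs plus a bias, squashed by a sigmoid activation.
        # Biological neurons integrate spikes over time; this is a loose analogy at best.
        return 1.0 / (1.0 + np.exp(-(np.dot(x, w) + b)))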

The limiting factor for reaching brain-like intelligence via ANNs is probably hardware. We would need over 100 TB just to store the weights for the neurons, not to mention the ridiculous amount of compute to run it.
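
Back-of-the-envelope for that storage figure (the synapse count and one byte per weight are rough assumptions, not measurements):

    synapses = 100e12         # ~10^14 synapses in a human brain (common estimate)
    bytes_per_weight = 1      # assume an 8-bit weight per synapse
    print(f"{synapses * bytes_per_weight / 1e12:.0f} TB")   # -> 100 TB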

replies(2): >>43616357 #>>43617044 #
codedokode ◴[] No.43616357{4}[source]
> not to mention the ridiculous amount of compute to run it.

How does the brain compute the weights, then? Or maybe your assumption that the brain is equivalent to a mathematical NN is wrong?