169 points by mattmarcus | 2 comments
EncomLab No.43612568
This is like claiming a photoresistor-controlled night light "understands when it is dark," or that a bimetallic-strip thermostat "understands temperature." You can say those words, and they are syntactically correct, but semantically they are entirely wrong.
robotresearcher No.43612689
You declare this very plainly without evidence or argument, but this is an age-old controversial issue. It’s not self-evident to everyone, including philosophers.
mubou No.43612771
It's neither age-old nor controversial. LLMs aren't intelligent by any stretch of the imagination. Each word/token is chosen as the one statistically most likely to follow the preceding ones. There is no capability for understanding in the design of an LLM. It's not a matter of opinion; this simply isn't how an LLM works.
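
Concretely, that next-token loop looks something like the sketch below. This is a minimal illustration using GPT-2 via the Hugging Face transformers library (an assumption chosen for concreteness, not what any particular chatbot actually runs), with greedy argmax decoding; production systems usually sample from the distribution instead, but the principle is the same.

    # Minimal sketch of next-token prediction; assumes torch and transformers are installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")    # small stand-in model, purely illustrative
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The thermostat turns on when", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(20):
            logits = model(ids).logits[0, -1]      # one score per token in the vocabulary
            next_id = torch.argmax(logits)         # greedily take the most likely next token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tok.decode(ids[0]))

Every emitted token is just another forward pass through the same statistical machinery; nothing in the loop checks whether the output is true.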

Any comparison to the human brain misses the point that an LLM simulates only one small part of it, and notably not the frontal lobe, which is required for intelligence, reasoning, self-awareness, etc.

So, no, it's not a question of philosophy. For an AI to enter that realm, it would need to be more than just an LLM with some bells and whistles: an LLM plus something else, perhaps, something fundamentally different that does not yet exist.

aSanchezStern No.43612834
Many people don't think we have any good evidence that our brains aren't essentially the same thing: a stochastic statistical model that produces outputs based on inputs.
SJC_Hacker No.43612929
That's probably the case 99% of the time.

But that 1% is pretty important.

For example, they are dismal at math problems that aren't just slight variations of problems they've seen before.

Here's one from blackpenredpen where ChatGPT insisted its solution to a problem solvable by high-school (or talented middle-school) students was correct, even after attempts to convince it that it was wrong: https://youtu.be/V0jhP7giYVY?si=sDE2a4w7WpNwp6zU&t=837

Rewind a bit earlier in the video to see the correct solution.

jquery No.43613956
ChatGPT o1 pro mode solved it on the first try, after 8 minutes and 53 seconds of “thinking”:

https://chatgpt.com/share/67f40cd2-d088-8008-acd5-fe9a9784f3...

SJC_Hacker No.43614474
The problem is: how do you know that it's correct?

A human would probably say "I don't know how to solve this problem." But the free version of ChatGPT is confidently wrong.