169 points mattmarcus | 9 comments
EncomLab ◴[] No.43612568[source]
This is like claiming a photoresistor-controlled night light "understands when it is dark" or that a bimetallic strip thermostat "understands temperature". You can say those words, and they are syntactically correct, but semantically they are entirely wrong.
replies(6): >>43612607 #>>43612629 #>>43612689 #>>43612691 #>>43612764 #>>43612767 #
robotresearcher ◴[] No.43612689[source]
You declare this very plainly without evidence or argument, but this is an age-old controversial issue. It’s not self-evident to everyone, including philosophers.
replies(2): >>43612771 #>>43613278 #
mubou ◴[] No.43612771[source]
It's not age-old, nor is it controversial. LLMs aren't intelligent by any stretch of the imagination. Each word/token is chosen as whichever is statistically most likely to follow the preceding context. There is no capability for understanding in the design of an LLM. It's not a matter of opinion; this simply isn't how an LLM works.
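A minimal sketch of what that token-by-token loop looks like (illustrative only; a toy numpy decoder I'm assuming for the example, not any real model's code):

    import numpy as np

    def next_token(logits, temperature=1.0):
        # logits: the model's score for every token in its vocabulary,
        # computed from the entire preceding context
        probs = np.exp((logits - np.max(logits)) / temperature)  # subtract max for numerical stability
        probs /= probs.sum()
        # pick from the resulting distribution (argmax would be greedy decoding);
        # nothing in this step "understands" anything
        return np.random.choice(len(probs), p=probs)

    # generation is just this step repeated in a loop:
    #   context.append(next_token(model(context)))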

Any comparison to the human brain misses the point that an LLM only simulates one small part of it, and notably not the frontal lobe, which is required for intelligence, reasoning, self-awareness, etc.

So, no, it's not a question of philosophy. For an AI to enter that realm, it would need to be more than just an LLM with some bells and whistles: an LLM plus something else, perhaps, something fundamentally different that does not yet exist.

replies(4): >>43612834 #>>43612933 #>>43613018 #>>43613698 #
1. aSanchezStern ◴[] No.43612834[source]
Many people don't think we have any good evidence that our brains aren't essentially the same thing: a stochastic statistical model that produces outputs based on inputs.
replies(5): >>43612929 #>>43612962 #>>43612972 #>>43613204 #>>43614346 #
2. SJC_Hacker ◴[] No.43612929[source]
That's probably the case 99% of the time.

But that 1% is pretty important.

For example, they are dismal at math problems that aren't just slight variations of problems they've seen before.

Here's one by blackandredpenn where ChatGPT insisted its solution to a problem solvable by high school / talented middle school students was correct, even after attempts to convince it that it was wrong. https://youtu.be/V0jhP7giYVY?si=sDE2a4w7WpNwp6zU&t=837

Rewind earlier in the video to see the real answer.

replies(2): >>43613366 #>>43613956 #
3. mubou ◴[] No.43612962[source]
Of course, you're right. Neural networks mimic exactly that after all. I'm certain we'll see an ML model developed someday that fully mimics the human brain. But my point is an LLM isn't that; it's a language model only. I know it can seem intelligent sometimes, but it's important to understand what it's actually doing and not ascribe feelings to it that don't exist in reality.

Too many people these days are forgetting this key point and putting a dangerous amount of faith in ChatGPT etc. as a result. I've seen DOCTORS using ChatGPT for diagnosis. Ignorance is scary.

4. nativeit ◴[] No.43612972[source]
Care to share any of this good evidence?
5. goatlover ◴[] No.43613204[source]
Do biologists and neuroscientists not have any good evidence or is that just computer scientists and engineers speaking outside of their field of expertise? There's always been this danger of taking computer and brain comparisons too literally.
6. LordDragonfang ◴[] No.43613366[source]
> For example, they are dismal at math problems that aren't just slight variations of problems they've seen before.

I know plenty of teachers who would describe their students the exact same way. The difference is mostly one of magnitude (of delta in competence), not quality.

Also, I think it's important to note that by "could be solved by high school / talented middle school students" you mean "specifically designed to challenge the top ~1% of them". Because if you say "LLMs only manage to beat 99% of middle schoolers at math", the claim seems a whole lot different.

7. jquery ◴[] No.43613956[source]
ChatGPT o1 pro mode solved it on the first try, after 8 minutes and 53 seconds of “thinking”:

https://chatgpt.com/share/67f40cd2-d088-8008-acd5-fe9a9784f3...

replies(1): >>43614474 #
8. root_axis ◴[] No.43614346[source]
If you're willing to torture the analogy, you can find a way to describe literally anything as a system of outputs based on inputs. In the case of the brain-to-LLM comparison, people are inclined to do it because they're eager to anthropomorphize something that produces text they can interpret as a speaker, but it's totally incorrect to suggest that our brains are "essentially the same thing" as LLMs. The comparison is specious even on a surface level. It's like saying that birds and planes are "essentially the same thing" because flight was achieved by modeling planes after birds.
9. SJC_Hacker ◴[] No.43614474{3}[source]
The problem is: how do you know that it's correct?

A human would probably say "I don't know how to solve the problem." But the free version of ChatGPT is confidently wrong.