108 points | bertman | 1 comment
n4r9 No.43819695
Although I'm sympathetic to the author's argument, I don't think they've found the best way to frame it. I have two main objections, i.e. points that I suspect LLM advocates might dispute.

Firstly:

> LLMs are capable of appearing to have a theory about a program ... but it’s, charitably, illusion.

To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

Secondly:

> Theories are developed by doing the work and LLMs do not do the work

Isn't this a little... anthropocentric? That's the way humans develop theories. In principle, couldn't a theory be developed by transmitting information directly into someone's brain, as if they had done the work?

ryandv No.43821318
> To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

This idea has already been explored by thought experiments such as John Searle's so-called "Chinese room" [0]; an LLM cannot have a theory about a program, any more than the computer in Searle's "Chinese room" understands "Chinese" by using lookup tables to generate canned responses to an input prompt.

In Searle's terms, the computer lacks "intentionality" regarding the topics the LLM appears to be discussing. Its words aren't "about" anything; they don't represent concepts or ideas or physical phenomena the way the words and thoughts of a human do. The computer doesn't actually "understand Chinese" the way a human can.
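To make the lookup-table picture concrete, here is a toy sketch of such a "room" (the phrase table is invented purely for illustration, and an LLM is of course not literally a dict; whether the analogy holds is exactly what's in dispute):

    # A toy "Chinese room": a pure lookup table mapping input prompts
    # to canned responses. It emits fluent-looking output for known
    # inputs while "understanding" nothing; the table itself is the
    # only "knowledge" in the system. (Entries invented for illustration.)
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",        # "How are you?" -> "Fine, thanks."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice."
    }

    def room(prompt: str) -> str:
        # Follow the rulebook mechanically; no symbol here is "about"
        # anything to the system executing the lookup.
        return RULEBOOK.get(prompt, "请再说一遍。")  # "Please say that again."

    print(room("你好吗？"))  # fluent reply, zero understanding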

[0] https://en.wikipedia.org/wiki/Chinese_room

CamperBob2 No.43821648
You're seriously still going to invoke the Chinese Room argument after what we've seen lately? Wow.

The computer understands Chinese better than Searle (or anyone else) understood the nature and functionality of language.

ryandv No.43821684
You're seriously going to invoke this braindead, reddit-tier "argumentation," or rather lack thereof, by claiming bewilderment and offering zero substantive points?

Wow.

CamperBob2 No.43821955
Yes, because the Chinese Room was a weak test the day it was proposed, and it's a heap of smoldering rhetorical wreckage now. It's Searle who failed to offer any substantive points.

How do you know you're not arguing with an LLM at the moment? You don't... any more than I do.

ryandv No.43821978
> How do you know you're not arguing with an LLM at the moment? You don't.

I wish I were right now. It would probably provide at least the semblance of greater insight into these topics.

> the Chinese Room was a weak test the day it was proposed

Why?

CamperBob2 No.43822163
> It would probably provide at least the semblance of greater insight into these topics.

That's very safe to say. You should try it. Then ask yourself how a real Chinese Room would have responded.

> Why?

My beef with the argument is that simulating intelligence well enough to get a given job done is indistinguishable from intelligence itself, with respect to the job in question.

More specific arguments along the lines of "Humans can do job X but computers cannot" have not held up well lately, but they were never on solid logical ground. Searle set out to construct such a logical ground, but he obviously failed. If you took today's LLMs back to 1980, when he proposed that argument, either Searle would have been laughed out of town or you would have been burned as a witch.

Arguments along the lines of "Machines can never do X, only humans can do that" never belonged in the scientific literature in the first place, and I think the Chinese Room falls into that class. I believe that any such argument needs to begin by explaining what's special about human thought. Right now, the only thing you can say about human thought that you can't say about AI is that humans have real-time sensory input and can perform long-term memory consolidation.

Those advantages impose real limitations on what current-generation LLM-based technology can do compared to humans, but they sound like temporary ones to me.

Jensson No.43822409
> Arguments along the lines of "Machines can never do X, only humans can do that"

That isn't the argument though.

> If you took today's LLMs back to the 1960s when he proposed that argument, either Searle would be laughed out of town, or you would be burned as a witch.

Do you think humans were different in 1980? No, they would have seen the same limitations that people point out today. AI optimism was still high back then.