
108 points | bertman | 1 comment
n4r9 ◴[] No.43819695[source]
Although I'm sympathetic to the author's argument, I don't think they've found the best way to frame it. I have two main objections, i.e. points that I guess LLM advocates might dispute.

Firstly:

> LLMs are capable of appearing to have a theory about a program ... but it’s, charitably, illusion.

To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

Secondly:

> Theories are developed by doing the work and LLMs do not do the work

Isn't this a little... anthropocentric? That's the way humans develop theories. In principle, could a theory not be developed by transmitting information into someone's brain patterns as if they had done the work?

replies(6): >>43819742 #>>43821151 #>>43821318 #>>43822444 #>>43822489 #>>43824220 #
ryandv ◴[] No.43821318[source]
> To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

This idea has already been explored by thought experiments such as John Searle's "Chinese room" [0]: an LLM can no more have a theory about a program than the computer in Searle's room can understand Chinese by using lookup tables to generate canned responses to an input prompt.

One says the computer lacks "intentionality" regarding the topics the LLM ostensibly discusses. Its words aren't "about" anything; they don't represent concepts, ideas, or physical phenomena the way the words and thoughts of a human do. The computer doesn't actually "understand Chinese" the way a human can.

[0] https://en.wikipedia.org/wiki/Chinese_room
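
For concreteness, here is a minimal sketch (Python, with made-up prompt/response pairs) of the lookup-table mechanism the room is imagined to run; the point is that it maps symbols to symbols without any model of what they mean:

    # A toy "Chinese room": canned replies keyed by input prompts.
    # The table produces plausible-looking output without any
    # representation of what the symbols mean.
    CANNED_REPLIES = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
    }

    def room(prompt: str) -> str:
        # Pure symbol matching: return the canned reply, or a stock fallback.
        return CANNED_REPLIES.get(prompt, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(room("你好吗？"))  # prints 我很好，谢谢。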

replies(6): >>43821648 #>>43822082 #>>43822399 #>>43822436 #>>43824251 #>>43828753 #
TeMPOraL ◴[] No.43822082[source]
Wait, isn't the conclusion to take from the "Chinese room" literally the opposite of what you suggest? I.e. it's the most basic, go-to example of a larger system showing capability (here, understanding Chinese) that is not present in any of its constituent parts individually.

> Their words aren't "about" anything, they don't represent concepts or ideas or physical phenomena the same way the words and thoughts of a human do. The computer doesn't actually "understand Chinese" the way a human can.

That's very much unclear at this point. We don't fully understand how we relate words to concepts and meaning ourselves, but to the extent we do, LLMs are by far the closest implementation of those same ideas in a computer.

replies(4): >>43822153 #>>43822155 #>>43822821 #>>43830055 #
vacuity ◴[] No.43822153[source]
The Chinese room experiment was (IIUC) originally intended by Searle to do as you claim: to justify computers as being capable of understanding like humans do. Since then, it has been used both in this pro-computer, "black box" sense and in the anti-computer, "white box" sense. Personally, I think both are relevant, and the issue with LLMs currently is not a theoretical failing but rather that they aren't convincing when viewed as black boxes (e.g. they fail the Turing test).
replies(1): >>43822877 #
MarkusQ ◴[] No.43822877[source]
No, it was used to argue that computers could pass the Turing test and still _not_ understand anything. It was a reductio intended to dispute exactly the claim you are ascribing to it, and to argue _against_ "computers as being capable of understanding like humans do".
replies(1): >>43823881 #
vacuity ◴[] No.43823881{3}[source]
Thanks. I stand corrected. I guess I should also add that, aside from the black-box view, there are pro-computer stances that claim computers can have genuine mentality and intentionality.