108 points bertman | 2 comments
n4r9 ◴[] No.43819695[source]
Although I'm sympathetic to the author's argument, I don't think they've found the best way to frame it. I have two main objections, i.e. points that I guess LLM advocates might dispute.

Firstly:

> LLMs are capable of appearing to have a theory about a program ... but it’s, charitably, illusion.

To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

Secondly:

> Theories are developed by doing the work and LLMs do not do the work

Isn't this a little... anthropocentric? That's the way humans develop theories. In principle, could a theory not be developed by transmitting information into someone's brain patterns as if they had done the work?

replies(6): >>43819742 #>>43821151 #>>43821318 #>>43822444 #>>43822489 #>>43824220 #
ryandv ◴[] No.43821318[source]
> To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

This idea has already been explored in thought experiments such as John Searle's "Chinese room" [0]: an LLM can no more have a theory about a program than the room's computer understands Chinese by using lookup tables to generate canned responses to an input prompt.

One says the computer lacks "intentionality" regarding the topics the LLM appears to be discussing. Its words aren't "about" anything; they don't represent concepts, ideas, or physical phenomena the way the words and thoughts of a human do. The computer doesn't actually understand Chinese the way a human can.
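
To make the lookup-table picture concrete, here's a minimal sketch (Python; the phrasebook entries are invented for illustration, not taken from Searle): the program maps input strings to canned outputs by pure pattern matching, with no representation anywhere of what the symbols mean.

    # A toy "Chinese room": canned responses keyed on input symbols.
    # The rulebook entries are hypothetical; the point is only that the
    # program manipulates symbols it has no interpretation of.
    RULEBOOK = {
        "你好吗?": "我很好,谢谢。",      # "How are you?" -> "I'm fine, thanks."
        "你叫什么名字?": "我没有名字。",  # "What's your name?" -> "I have no name."
    }

    def room(prompt: str) -> str:
        # Follow the formal rules: look the symbols up, copy the answer out.
        # Nothing in this function is "about" greetings or names.
        return RULEBOOK.get(prompt, "请再说一遍。")  # "Please say that again."

    print(room("你好吗?"))  # -> 我很好,谢谢。

Whether the whole system (rules plus executor) understands anything is exactly what the thought experiment disputes; the sketch only shows how little machinery "canned responses" requires.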

[0] https://en.wikipedia.org/wiki/Chinese_room

replies(6): >>43821648 #>>43822082 #>>43822399 #>>43822436 #>>43824251 #>>43828753 #
smithkl42 ◴[] No.43822436[source]
The Chinese Room argument is a great thought experiment for understanding why the computational model is an inadequate explanation of consciousness and qualia. But it proves nothing about reason, which LLMs have clearly shown needs to be distinguished from consciousness. And theories fall into the category of reason, not of consciousness. Or, to put it in a way you might find more acceptable: maybe a computer will never internally know that it has developed a theory, but it sure seems like it will be able to act and talk as if it had, much like a philosophical zombie.
replies(5): >>43822632 #>>43822859 #>>43822914 #>>43823153 #>>43853526 #
1. ryandv ◴[] No.43822859{4}[source]
> The Chinese Room argument is a great thought experiment for understanding why the computational model is an inadequate explanation of consciousness and qualia.

To be as accurate as possible with respect to the primary source [0], the Chinese room thought experiment was devised as a refutation of "strong AI," or the position that

    the appropriately programmed computer really is a mind, in the
    sense that computers given the right programs can be literally
    said to understand and have other cognitive states.
Searle's position?

    Rather, whatever purely formal principles you put into the
    computer, they will not be sufficient for understanding, since
    a human will be able to follow the formal principles without
    understanding anything. [...] I will argue that in the literal
    sense the programmed computer understands what the car and the
    adding machine understand, namely, exactly nothing.
[0] https://home.csulb.edu/~cwallis/382/readings/482/searle.mind...
replies(1): >>43853548 #
2. musicale ◴[] No.43853548[source]
Nilsson's complaint seems accurate: Searle conflates the running program with the underlying system/interpreter that runs it.
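
A toy framing of that complaint (my own sketch, in the same spirit as the lookup-table example above; none of this is Nilsson's code): the human in the room corresponds to a dumb, general-purpose interpreter, and everything "Chinese" lives in the program it happens to be running. Searle observes that the interpreter understands nothing; the "strong AI" claim, though, is about the program-plus-interpreter system as a whole.

    # Assumed toy split between *program* (data) and *interpreter* (executor).
    Program = list[tuple[str, str]]  # (pattern, response) rules

    def interpreter(program: Program, prompt: str) -> str:
        # Searle's human plays this role: mechanically matching rules
        # without understanding either the rules or the prompt.
        for pattern, response in program:
            if pattern == prompt:
                return response
        return ""

    chinese_program: Program = [("你好吗?", "我很好,谢谢。")]
    print(interpreter(chinese_program, "你好吗?"))  # -> 我很好,谢谢。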