
108 points bertman | 2 comments
n4r9 ◴[] No.43819695[source]
Although I'm sympathetic to the author's argument, I don't think they've found the best way to frame it. I have two main objections, i.e. points that I guess LLM advocates might dispute.

Firstly:

> LLMs are capable of appearing to have a theory about a program ... but it’s, charitably, illusion.

To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

Secondly:

> Theories are developed by doing the work and LLMs do not do the work

Isn't this a little... anthropocentric? That's the way humans develop theories. In principle, could a theory not be developed by transmitting information into someone's brain patterns as if they had done the work?

replies(6): >>43819742 #>>43821151 #>>43821318 #>>43822444 #>>43822489 #>>43824220 #
ryandv ◴[] No.43821318[source]
> To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

This idea has already been explored by thought experiments such as John Searle's so-called "Chinese room" [0]: an LLM can no more have a theory about a program than the computer in Searle's room understands Chinese by using lookup tables to generate canned responses to an input prompt.

One says the computer lacks "intentionality" regarding the topics it ostensibly discusses. Its words aren't "about" anything; they don't represent concepts or ideas or physical phenomena the way the words and thoughts of a human do. The computer doesn't actually "understand Chinese" the way a human can.
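
To make the analogy concrete, here is a deliberately crude sketch (in Python, with invented example prompts) of the lookup-table responder described above; it emits plausible-looking replies without any representation of what the symbols mean:

    # Crude sketch of the "Chinese room" responder: canned replies keyed on
    # the exact input string, with no model of meaning anywhere.
    CANNED_REPLIES = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "Fine, thanks."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
    }

    def chinese_room(prompt: str) -> str:
        # Pure symbol shuffling: match the input against a stored rule and emit
        # the associated reply, or a canned fallback.
        return CANNED_REPLIES.get(prompt, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你好吗？"))  # -> 我很好，谢谢。

The exchange can look competent from the outside even though nothing in the room is "about" Chinese at all.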

[0] https://en.wikipedia.org/wiki/Chinese_room

replies(6): >>43821648 #>>43822082 #>>43822399 #>>43822436 #>>43824251 #>>43828753 #
TeMPOraL ◴[] No.43822082[source]
Wait, isn't the conclusion to take from the "Chinese room" literally the opposite of what you suggest? That is, it's the most basic, go-to example of a larger system exhibiting a capability (here, understanding Chinese) that is not present in any of its constituent parts individually.

> Their words aren't "about" anything, they don't represent concepts or ideas or physical phenomena the same way the words and thoughts of a human do. The computer doesn't actually "understand Chinese" the way a human can.

That's very much unclear at this point. We don't fully understand how we relate words to concepts and meaning ourselves, but to the extent we do, LLMs are by far the closest implementation of those same ideas in a computer.

replies(4): >>43822153 #>>43822155 #>>43822821 #>>43830055 #
dragonwriter ◴[] No.43822821[source]
The Chinese Room is a mirror: it takes people's hidden (well, often not very hidden) biases about whether the universe is mechanical or whether understanding involves dualistic metaphysical woo, and reflects them back at them as conclusions.

That's not why it was presented, of course; Searle aimed to prove something. But his use of it just illustrates which side of that divide he was on.

replies(1): >>43826639 #
1. Yizahi ◴[] No.43826639[source]
Do you think the force of gravity is mechanical, or is it metaphysical woo? Because scientists have no idea precisely how it works, just like our brain and consciousness.

Hint: there are more possibilities than just the two you have mentioned.

replies(1): >>43857029 #
2. namaria ◴[] No.43857029[source]
> Do you think the force of gravity is mechanical, or is it metaphysical woo? Because scientists have no idea precisely how it works, just like our brain and consciousness.

Nonsense. We know how gravity works to very high precision. What we don't know is why it works.
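
For example, Newton's law F = G·m1·m2/r^2 (and, to higher precision, general relativity's field equations) predicts gravitational behaviour extraordinarily well while saying nothing about why mass-energy curves spacetime in the first place.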