
108 points bertman | 2 comments
n4r9 ◴[] No.43819695[source]
Although I'm sympathetic to the author's argument, I don't think they've found the best way to frame it. I have two main objections, i.e. points that I guess LLM advocates might dispute.

Firstly:

> LLMs are capable of appearing to have a theory about a program ... but it’s, charitably, illusion.

To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

Secondly:

> Theories are developed by doing the work and LLMs do not do the work

Isn't this a little... anthropocentric? That's the way humans develop theories. In principle, could a theory not be developed by transmitting information into someone's brain patterns as if they had done the work?

replies(6): >>43819742 #>>43821151 #>>43821318 #>>43822444 #>>43822489 #>>43824220 #
ryandv ◴[] No.43821318[source]
> To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

This idea has already been explored by thought experiments such as John Searle's "Chinese room" [0]: an LLM can no more have a theory about a program than the computer in Searle's Chinese room understands Chinese by using lookup tables to generate canned responses to an input prompt.

One says the computer lacks "intentionality" regarding the topics the LLM ostensibly discusses. Its words aren't "about" anything; they don't represent concepts, ideas, or physical phenomena the way the words and thoughts of a human do. The computer doesn't actually "understand Chinese" the way a human can.

[0] https://en.wikipedia.org/wiki/Chinese_room

replies(6): >>43821648 #>>43822082 #>>43822399 #>>43822436 #>>43824251 #>>43828753 #
1. looofooo0 ◴[] No.43822399[source]
But the LLM interacts with the program and the world through a debugger, run-time feedback, a linter, a fuzzer, etc., and we can collect all the user feedback, usage patterns ... Moreover, it can also get visual feedback, reason through other programs such as physics simulations, use a robot to physically interact with the device running the code, and use a proof verifier like Lean to ensure its logical model of the program is sound, going back and forth between the logical model and the actual program through experiments (a rough sketch of such a loop is below). Maybe not now, but I don't see why the LLM needs to be kept in the Chinese Room.
replies(1): >>43824262 #
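For concreteness, here is a minimal sketch (in Python) of the kind of tool-feedback loop described above; the ask_llm call is a hypothetical stand-in for a model API, and the ruff/pytest commands are just example tools:

    import subprocess

    def run(cmd):
        # Run an external tool and capture its exit code and combined output.
        p = subprocess.run(cmd, capture_output=True, text=True)
        return p.returncode, p.stdout + p.stderr

    def ask_llm(prompt):
        # Hypothetical stand-in for a call to a model that returns revised source code.
        raise NotImplementedError

    def refine(source_path, max_rounds=5):
        # Back-and-forth between the model's picture of the program and the program
        # itself, mediated by external tools rather than by text prediction alone.
        for _ in range(max_rounds):
            lint_rc, lint_out = run(["ruff", "check", source_path])  # linter feedback
            test_rc, test_out = run(["pytest", "-q"])                # run-time feedback
            if lint_rc == 0 and test_rc == 0:
                return True  # the tools no longer contradict the model's output
            feedback = "Linter said:\n" + lint_out + "\nTests said:\n" + test_out
            new_source = ask_llm("Revise " + source_path + " given:\n" + feedback)
            with open(source_path, "w") as f:
                f.write(new_source)
        return False

The same shape would extend to a fuzzer or a Lean proof check: each round, the tools' output is evidence about the real program that the model did not generate itself.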
2. jimbokun ◴[] No.43824262[source]
That's true in general but not true of any current LLM, to my knowledge. Different subsets of those inputs and modalities, yes. But no current LLM has access to all of them.