108 points by bertman | 1 comment

n4r9 No.43819695
Although I'm sympathetic to the author's argument, I don't think they've found the best way to frame it. I have two main objections, i.e. points I suspect LLM advocates might dispute.

Firstly:

> LLMs are capable of appearing to have a theory about a program ... but it’s, charitably, illusion.

To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

Secondly:

> Theories are developed by doing the work and LLMs do not do the work

Isn't this a little... anthropocentric? That's the way humans develop theories. In principle, could a theory not be developed by transmitting information into someone's brain patterns as if they had done the work?

ryandv No.43821318
> To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

This idea has already been explored by thought experiments such as John Searle's "Chinese room" [0]: an LLM can no more have a theory about a program than the computer in Searle's room understands Chinese by using lookup tables to generate canned responses to an input prompt.
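
For illustration, here is a minimal Python sketch of the lookup-table mechanism the comment describes. The rulebook entries are invented stand-ins, and a real LLM is of course far more than a literal dict; this is only meant to make the "canned responses" picture concrete:

    # A toy "Chinese room": map input symbols to canned responses with no
    # model of what the symbols mean. Phrases are invented for illustration.
    RULEBOOK = {
        "你好吗?": "我很好,谢谢。",      # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样?": "天气很好。",  # "How's the weather?" -> "It's nice."
    }

    def room(prompt: str) -> str:
        # Follow the rulebook mechanically; nothing here "understands" Chinese.
        return RULEBOOK.get(prompt, "请再说一遍。")  # "Please say that again."

    print(room("你好吗?"))  # fluent output, zero comprehension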

One says the computer lacks "intentionality" regarding the topics the LLM appears to be discussing. Its words aren't "about" anything; they don't represent concepts, ideas, or physical phenomena the way the words and thoughts of a human do. The computer doesn't actually "understand Chinese" the way a human can.

[0] https://en.wikipedia.org/wiki/Chinese_room

jimbokun No.43824251
The flaw of the Chinese Room argument is that it fails to explain why it does not apply to humans as well.

Does a single neuron "understand" Chinese? 10 neurons? 100? 1 million?

If no individual neuron or small group of neurons understands Chinese, how can you say that any brain made of neurons understands Chinese?
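
As a toy analogy for this whole-versus-parts point, consider a tiny hand-wired network in Python: no single unit computes XOR, yet the network as a whole does. The weights are hand-picked for the sketch, and nothing here is a claim about neurons or brains:

    import numpy as np

    def step(x):
        # Heaviside threshold: a unit fires iff its input exceeds 0
        return (x > 0).astype(int)

    # Hand-picked weights: hidden unit 0 fires on OR, hidden unit 1 on AND.
    W1 = np.array([[1, 1], [1, 1]]); b1 = np.array([-0.5, -1.5])
    W2 = np.array([1, -1]);          b2 = -0.5

    def xor(x):
        h = step(x @ W1.T + b1)   # neither hidden unit "knows" XOR...
        return step(h @ W2 + b2)  # ...but their combination computes it

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, xor(np.array([a, b])))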

ryandv No.43824705
> The flaw of the Chinese Room argument is the need to explain why it does not apply to humans as well.

But it does: the thought experiment continues by supposing that a human is given those lookup tables and instructions on how to use them, instead of having the computer run the procedure. The human doesn't understand the foreign language either, not in the way a native speaker does.

The point is that no formal procedure or algorithm is sufficient for such a system to have understanding. Even if you memorized all the lookup tables and instructions and executed this procedure entirely in your head, you would still lack understanding.

> Does a single neuron "understand" Chinese? 10 neurons? 100? 1 million?

This sounds like a sorites paradox [0]. I don't know how to resolve it, other than to observe that our notions of "understanding", "thought", and "intelligence" are ill-defined, more heuristic approximations than terms with precise meanings. Hence the tendency of computer science to use thought experiments like Turing's imitation game or Searle's Chinese room as proxies for assessing intelligence, in lieu of being able to treat these terms and ideas more rigorously.

[0] https://plato.stanford.edu/entries/sorites-paradox/