108 points bertman | 1 comment
n4r9 ◴[] No.43819695[source]
Although I'm sympathetic to the author's argument, I don't think they've found the best way to frame it. I have two main objections, i.e. points I suspect LLM advocates might dispute.

Firstly:

> LLMs are capable of appearing to have a theory about a program ... but it’s, charitably, illusion.

To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

Secondly:

> Theories are developed by doing the work and LLMs do not do the work

Isn't this a little... anthropocentric? That's the way humans develop theories. In principle, could a theory not be developed by transmitting information into someone's brain patterns as if they had done the work?

replies(6): >>43819742 #>>43821151 #>>43821318 #>>43822444 #>>43822489 #>>43824220 #
ryandv ◴[] No.43821318[source]
> To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

This idea has already been explored by thought experiments such as John Searle's so-called "Chinese room" [0]; an LLM cannot have a theory about a program, any more than the computer in Searle's "Chinese room" understands "Chinese" by using lookup tables to generate canned responses to an input prompt.

One says the computer lacks "intentionality" regarding the topics that the LLM ostensibly appears to be discussing. Their words aren't "about" anything, they don't represent concepts or ideas or physical phenomena the same way the words and thoughts of a human do. The computer doesn't actually "understand Chinese" the way a human can.
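
To make the intuition concrete, here is a minimal sketch (Python, purely illustrative and not part of Searle's argument or this thread) of the kind of rule-following the room does: input strings are paired with output strings by lookup, with no grasp of what either side means.

    # A toy "Chinese room": canned responses selected by pure symbol matching.
    # The rule book below is hypothetical; the point is that the program
    # manipulates strings it does not understand, yet the output looks fluent.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
    }

    def chinese_room(prompt: str) -> str:
        # Pure lookup: no semantics, only symbol manipulation.
        return RULE_BOOK.get(prompt, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(chinese_room("你好吗？"))  # fluent-looking reply, zero understanding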

[0] https://en.wikipedia.org/wiki/Chinese_room

replies(6): >>43821648 #>>43822082 #>>43822399 #>>43822436 #>>43824251 #>>43828753 #
smithkl42 ◴[] No.43822436[source]
The Chinese Room argument is a great thought experiment for understanding why the computational model is an inadequate explanation of consciousness and qualia. But it proves nothing about reason, which LLMs have clearly shown needs to be distinguished from consciousness. And theories fall into the category of reason, not of consciousness. Or another way of putting it that you might find more acceptable: maybe a computer will never, internally, know that it has developed a theory - but it sure seems like it will be able to act and talk as if it had, much like a philosophical zombie.
replies(5): >>43822632 #>>43822859 #>>43822914 #>>43823153 #>>43853526 #
slippybit ◴[] No.43822914[source]
> maybe a computer will never, internally, know that it has developed a theory

Happens to people all the time :) ... especially if they don't have a concept of theories and hypotheses.

People are dumb and uneducated only until they aren't anymore, which is, even in the worst cases, no more than a decade of sustained effort. In fact, we don't even know how quickly neurogenesis and/or cognitive abilities might accelerate once a previously dense person reaches, or "breaks through", a certain plateau. I'm sure there is research, but it's not something for which a satisfyingly precise answer can be formulated yet.

If I formulate a new hypothesis, the LLM can tell me, "nope, you are the only idiot believing this path is worth pursuing". And if I go ahead, the LLM can tell me: "that's not how this usually works, you know", "professionals do it this way", "this is not a proof", "this is not a logical link", "this is nonsense but I commend your creativity!", all the way until the actual aha-moment when everything fits together and we have an actual working theory ... in theory.

We can then analyze the "knowledge graph" in 4D, and the LLM could learn a theory of what it's like to have a potential theory, even though absolutely nothing supports the hypothesis or its constituent links at the moment of "conception".

Stay put, it will happen.