
108 points | bertman | 1 comment | source
n4r9 ◴[] No.43819695[source]
Although I'm sympathetic to the author's argument, I don't think they've found the best way to frame it. I have two main objections i.e. points I guess LLM advocates might dispute.

Firstly:

> LLMs are capable of appearing to have a theory about a program ... but it’s, charitably, illusion.

To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

Secondly:

> Theories are developed by doing the work and LLMs do not do the work

Isn't this a little... anthropocentric? That's the way humans develop theories. In principle, could a theory not be developed by transmitting information into someone's brain patterns as if they had done the work?

replies(6): >>43819742 #>>43821151 #>>43821318 #>>43822444 #>>43822489 #>>43824220 #
ryandv ◴[] No.43821318[source]
> To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

This idea has already been explored by thought experiments such as John Searle's so-called "Chinese room" [0]; an LLM cannot have a theory about a program, any more than the computer in Searle's "Chinese room" understands "Chinese" by using lookup tables to generate canned responses to an input prompt.

One says the computer lacks "intentionality" regarding the topics that the LLM ostensibly appears to be discussing. Their words aren't "about" anything, they don't represent concepts or ideas or physical phenomena the same way the words and thoughts of a human do. The computer doesn't actually "understand Chinese" the way a human can.
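
To make the "lookup table" picture concrete, here is a minimal sketch (the rulebook entries and the function are invented purely for illustration). The program produces fluent-looking responses for inputs it has rules for, yet nothing inside it is "about" anything:

    # Minimal sketch of the rulebook a Chinese-room-style responder uses.
    # Entries are invented; the point is that input symbols map to output
    # symbols with no model of what either side means.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
    }

    def room(prompt: str) -> str:
        # Return the canned reply, or a stock fallback for unknown prompts.
        return RULEBOOK.get(prompt, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(room("你好吗？"))  # Fluent output, zero understanding.

Scaling the rulebook up doesn't change the situation; that is the intuition the thought experiment is driving at.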

[0] https://en.wikipedia.org/wiki/Chinese_room

replies(6): >>43821648 #>>43822082 #>>43822399 #>>43822436 #>>43824251 #>>43828753 #
smithkl42 ◴[] No.43822436[source]
The Chinese Room argument is a great thought experiment for understanding why the computational model is an inadequate explanation of consciousness and qualia. But it proves nothing about reason, which LLMs have clearly shown needs to be distinguished from consciousness. And theories fall into the category of reason, not of consciousness. Or another way of putting it that you might find more acceptable: maybe a computer will never, internally, know that it has developed a theory - but it sure seems like it will be able to act and talk as if it had, much like a philosophical zombie.
replies(5): >>43822632 #>>43822859 #>>43822914 #>>43823153 #>>43853526 #
dingnuts ◴[] No.43822632{4}[source]
> it proves nothing about reason, which LLMs have clearly shown needs to be distinguished from consciousness.

Uh, they have? Are you saying they know how to reason? Because if so, why is it that when I give a state-of-the-art model the documentation for a new library (documentation that lacks examples) and ask it to write something, it cannot even begin to do that, even when that documentation is in the training data? A model that can reason should be able to understand the documentation and create novel examples. It cannot.

This happened to me just the other day. If the model could reason, the examples of the language (which it has) plus the expository documentation should have been sufficient.

Instead, the model repeatedly inserted bullshitted code in the style of the language I wanted, but with library calls and names based on a version of the library for another language.
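
To give a flavor of the failure mode, here's a hypothetical illustration ("somelib" and every call on it are invented; it's not the actual library I was using). The docs describe something like the first block; the model kept producing the second, which is the other language's API spelled in the syntax I asked for:

    # Hypothetical illustration -- "somelib" and its methods are made up.
    # What the (imaginary) Python documentation describes:
    import somelib

    client = somelib.Client(api_key="KEY")
    records = client.fetch_records(limit=10)

    # What the model kept emitting instead: Python syntax, but names and
    # call style lifted from the JS flavor of the same (imaginary) library,
    # so it fails at runtime:
    client = somelib.createClient({"apiKey": "KEY"})
    records = client.records.list({"limit": 10})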

This is evidence of reasoning ability? Claude Sonnet 3.7 and Gemini Pro both exhibited this behavior last week.

I think this technology is fundamentally the same as it has been since GPT-2.

replies(2): >>43822725 #>>43822770 #
1. slippybit ◴[] No.43822770{5}[source]
> A model that can reason should be able to understand the documentation and create novel examples. It cannot.

That's due to limitations imposed for "security". A prompt like "here's a new X, do Y with it" can open up holes bigger and more complex than anyone can currently handle "in time".

It's not about "abilities" with LLMs for now, but about functions that work within the range of edge cases, sometimes including them, other times not.

You could still guide it to fulfill the task, though. It just cannot be allowed to do it on its own. But since merely "forbidding" an LLM to do something is about as effective as doing that to a child with mischievous older brothers, the only ways to actually do it result in "bullshitted" code and "hallucinations".

If I understood the problem correctly, that is.