
108 points bertman | 2 comments
n4r9 No.43819695
Although I'm sympathetic to the author's argument, I don't think they've found the best way to frame it. I have two main objections, i.e. points that I suspect LLM advocates might dispute.

Firstly:

> LLMs are capable of appearing to have a theory about a program ... but it’s, charitably, illusion.

To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

Secondly:

> Theories are developed by doing the work and LLMs do not do the work

Isn't this a little... anthropocentric? That's the way humans develop theories. In principle, could a theory not be developed by transmitting information into someone's brain patterns as if they had done the work?

replies(6): >>43819742 #>>43821151 #>>43821318 #>>43822444 #>>43822489 #>>43824220 #
Jensson No.43821151
> To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

Human theory building works; we have demonstrated this. Our science, which lets us build things on top of prior things, proves it.

LLM theory building so far doesn't work: they always veer off in a wrong direction after a few steps. You would need to prove that LLMs can build theories, just as we have proved that humans can.

replies(3): >>43821344 #>>43821401 #>>43822289 #
dkarl No.43822289
The article is about what LLMs can do, and I read it as being about what they could do in principle, as they are developed further. It's an argument from principle, not from their current limitations.

You can read it as a claim about what LLMs can do now, but that wouldn't be very interesting, because it's obvious that no current LLM can replace a human programmer.

I think the author contradicts themselves. They argue that LLMs cannot build theories because they fundamentally do not work like humans do, and they conclude that LLMs can't replace human programmers because human programmers need to build theories. But if LLMs fundamentally do not work like humans, how do we know that they need to build theories the same way that humans do?

replies(1): >>43824283 #
jimbokun No.43824283
> because it's obvious that no current LLM can replace a human programmer.

A lot of managers need to be informed of this.