
628 points by cratermoon | 2 comments
tptacek No.44461381
> LLM output is crap. It’s just crap. It sucks, and is bad.

Still don't get it. LLM outputs are nondeterministic. LLMs invent APIs that don't exist. That's why you filter those outputs through agent constructions, which actually compile code. The nondeterminism of LLMs doesn't make your compiler nondeterministic.

All sorts of ways to knock LLM-generated code. Most I disagree with, all colorable. But this article is based on a model of LLM code generation from 6 months ago which is simply no longer true, and you can't gaslight your way back to Q1 2024.

replies(7): >>44461418 #>>44461426 #>>44461474 #>>44461544 #>>44461933 #>>44461994 #>>44463037 #
beckthompson No.44461418
AIs still frequently make stuff up - there isn't really a way to get out of that. Have they improved a lot in the last six months? 100%! But they still make mistakes, and it's quite common.
replies(2): >>44461516 #>>44461572 #
tptacek No.44461516
LLM calls make stuff up. Your compiler can't make things up. An agent iterates LLM calls. When your LLM call makes up an API, your compiler will generate errors. The errors get fed back into the iterative loop. In pretty much every real case, the LLM corrects itself, but either way the result is clear: the code may be wrong, but it shouldn't hallucinate entire APIs.
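
A minimal sketch of that loop in Python (llm_generate is a hypothetical placeholder, not any real SDK, and go build stands in for whatever compile step the agent actually runs):

    import os
    import subprocess
    import tempfile

    def llm_generate(prompt: str) -> str:
        """Hypothetical placeholder for an LLM call; wire in any provider SDK."""
        raise NotImplementedError

    def agent_loop(task: str, max_iterations: int = 5) -> str:
        """Generate code, gate it through a real compiler, feed errors back."""
        prompt = task
        for _ in range(max_iterations):
            code = llm_generate(prompt)
            # Write the candidate to disk so a real compiler can judge it.
            with tempfile.NamedTemporaryFile("w", suffix=".go", delete=False) as f:
                f.write(code)
                path = f.name
            try:
                result = subprocess.run(
                    ["go", "build", "-o", os.devnull, path],
                    capture_output=True, text=True,
                )
            finally:
                os.unlink(path)
            if result.returncode == 0:
                return code  # Compiled: a made-up API would have failed here.
            # Feed the compiler's errors back into the next LLM call.
            prompt = f"{task}\n\nYour last attempt failed to compile:\n{result.stderr}"
        raise RuntimeError("no compiling candidate within the iteration budget")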
replies(2): >>44461543 #>>44461587 #
1. loire280 No.44461587
A great solution to this problem, but it doesn't seem like this approach will generalize to problems in other fields, or even to more subtle coding confabulations that can't be detected by the compiler or static analysis.
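
For example (an illustrative snippet, not from the thread), a confabulation that no compiler or linter flags:

    def is_leap_year(year: int) -> bool:
        # Type-checks and passes static analysis, but the rule is subtly
        # wrong: it omits the century exception, so 1900 comes back True.
        # Only tests or review catch this class of error.
        return year % 4 == 0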
replies(1): >>44461608 #
2. tptacek No.44461608
I vehemently agree with this. But it doesn't change the falsity of the claim in the article.