
627 points cratermoon | 2 comments
tptacek No.44461381
> LLM output is crap. It’s just crap. It sucks, and is bad.

Still don't get it. LLM outputs are nondeterministic. LLMs invent APIs that don't exist. That's why you filter those outputs through agent constructions, which actually compile the code. The nondeterminism of LLMs doesn't make your compiler nondeterministic.
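
Concretely, the loop looks something like this. A minimal sketch only: generate_code is a hypothetical stand-in for whatever model you call (ChatGPT, Copilot, anything else), and `go build` is just one example of a compiler doing the grounding, not a claim about any particular agent product.

    import subprocess
    import tempfile

    def generate_code(prompt: str, feedback: str = "") -> str:
        # Hypothetical stand-in for the LLM call. Given the task and any
        # compiler errors from the previous attempt, return a new candidate
        # Go source string.
        raise NotImplementedError("wire up your model of choice here")

    def try_compile(source: str) -> tuple[bool, str]:
        # The deterministic part: hand the candidate to the real compiler.
        with tempfile.NamedTemporaryFile("w", suffix=".go", delete=False) as f:
            f.write(source)
            path = f.name
        result = subprocess.run(["go", "build", path],
                                capture_output=True, text=True)
        return result.returncode == 0, result.stderr

    def agent_loop(prompt: str, max_attempts: int = 5) -> str | None:
        feedback = ""
        for _ in range(max_attempts):
            candidate = generate_code(prompt, feedback)
            ok, errors = try_compile(candidate)
            if ok:
                return candidate   # invented APIs never survive this check
            feedback = errors      # feed compiler errors back to the model
        return None                # give up; a human looks at the prompt

The model can be as nondeterministic as it likes; the only code that escapes the loop is code the compiler accepted.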

All sorts of ways to knock LLM-generated code. Most I disagree with, all colorable. But this article is based on a model of LLM code generation from 6 months ago which is simply no longer true, and you can't gaslight your way back to Q1 2024.

replies(7): >>44461418 #>>44461426 #>>44461474 #>>44461544 #>>44461933 #>>44461994 #>>44463037 #
ipdashc No.44461474
There was an article that made the rounds a few weeks ago that still rings true. Basically, it feels like one is going crazy reading either "extreme" of the whole LLM conversation: one extreme (obviously) being the "AI can do anything" Twitter techbro types, the other being articles like this that claim it can't do anything.

I know the author already addressed this, literally calling out HN by name, but I just don't get it. You don't even need agents (though I'm sure they help); I still just use regular ChatGPT or Copilot or whatever, and it's still occasionally useful. You type in what you want it to do, it gives you code, and usually the code works. Can we appreciate how insane this would have been, what, half a decade ago? Are our standards literally "the magic English-to-code machine doesn't work 100% of the time, so it's total crap, utterly useless"?

I absolutely agree with the general thrust of the article, the overall sense of disillusionment, the impact LLM abuse is going to have on education, etc. I don't even particularly like LLMs. But it really does feel like gaslighting, to the point that when these essays argue that LLMs are entirely useless for coding, it just makes me take them less seriously.

replies(1): >>44461503 #
1. paulddraper No.44461503
> it just makes me take them less seriously

Indeed. This is how to spot an ideologue with an axe to grind, not someone whose beliefs are shaped by dispassionate observation.

replies(1): >>44461749 #
2. happytoexplain No.44461749
Aside from trivial facts, beliefs cannot be, and should not be, shaped by dispassionate observation alone. Even yours are not. And framing it the way you have is simply the same fallacy as the one the author is accused of, just pointed in the opposite direction.