
627 points by cratermoon | 1 comment
tptacek ◴[] No.44461381[source]
> LLM output is crap. It’s just crap. It sucks, and is bad.

Still don't get it. LLM outputs are nondeterministic. LLMs invent APIs that don't exist. That's why you filter those outputs through agent constructions, which actually compile the code. The nondeterminism of the LLM doesn't make your compiler nondeterministic.
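The "agent construction" being described is, at its core, a loop that gates nondeterministic model output behind a deterministic check. A minimal sketch of that idea, where `generate_code` is a hypothetical stand-in for a real LLM call and Python's `ast.parse` plays the role of the compiler:

```python
import ast

def generate_code(prompt: str, attempt: int) -> str:
    # Hypothetical stand-in for an LLM call. To make the sketch
    # runnable, the first attempt returns broken code and the
    # second returns valid code.
    if attempt == 0:
        return "def add(a, b)\n    return a + b"   # missing colon
    return "def add(a, b):\n    return a + b"

def agent_loop(prompt: str, max_attempts: int = 3) -> str:
    feedback = ""
    for attempt in range(max_attempts):
        code = generate_code(prompt + feedback, attempt)
        try:
            ast.parse(code)   # deterministic gate: output must parse
            return code       # accepted only once it compiles
        except SyntaxError as err:
            # Compiler diagnostics are fed back into the next prompt.
            feedback = f"\nFix this compile error: {err}"
    raise RuntimeError("no compilable output within the attempt budget")
```

Whatever the model emits on any given run, only output that passes the compile gate escapes the loop, which is the sense in which the pipeline's result is deterministic even though the generator isn't.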

All sorts of ways to knock LLM-generated code. Most I disagree with, all colorable. But this article is based on a model of LLM code generation from 6 months ago which is simply no longer true, and you can't gaslight your way back to Q1 2024.

1. d4rkn0d3z ◴[] No.44463037[source]
Isn't this saying: "We get it that our nondeterministic bullshit machine writes crap, but we are wrapping it in a deterministic finite state machine to bring back determinism. We call it 'agentic'"?

Seems like 40 years of effort spent making deterministic computing work in a nondeterministic universe is being cast aside because we thought nondeterminism might work better. Turns out we need determinism after all.

Following this out, we might end up with alternating layers of determinism and nondeterminism each trying to correct the output of the layer below.

I would argue AI is a harder problem than any humans have ever tried to solve. How does it benefit me to turn every mundane problem into the hardest problem ever? As they say on the internet: ...and now you have two problems, the second of which is always the hardest one ever.