
628 points | cratermoon | 1 comment
tptacek | No.44461381
> LLM output is crap. It’s just crap. It sucks, and is bad.

Still don't get it. LLM outputs are nondeterministic. LLMs invent APIs that don't exist. That's why you filter those outputs through agent constructions, which actually compile code. The nondeterminism of LLMs doesn't make your compiler nondeterministic.
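
(If "agent constructions" is unfamiliar: the shape is just a generate/compile/retry loop, where the project's real toolchain, not the model, decides whether a candidate is accepted. A rough sketch below; `generate_code` is a hypothetical placeholder for whatever model API you call, and `go build` stands in for whatever compiler or typechecker your project actually uses.)

```python
import subprocess
import tempfile
from pathlib import Path


def generate_code(prompt: str, feedback: str | None = None) -> str:
    """Hypothetical placeholder for a model call; any LLM API fits here.

    `feedback` carries compiler errors from the previous attempt back into
    the prompt so the model can correct itself.
    """
    raise NotImplementedError("wire up your model API of choice")


def compiles(source: str) -> tuple[bool, str]:
    """Check a candidate with the real toolchain (here, `go build` as an example).

    Hallucinated APIs surface as ordinary 'undefined: Foo' compile errors.
    """
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "main.go"
        src.write_text(source)
        proc = subprocess.run(
            ["go", "build", "-o", str(Path(tmp) / "out"), str(src)],
            capture_output=True,
            text=True,
        )
        return proc.returncode == 0, proc.stderr


def agent_loop(prompt: str, max_attempts: int = 5) -> str:
    """Regenerate until the compiler accepts the output, or give up."""
    feedback = None
    for _ in range(max_attempts):
        candidate = generate_code(prompt, feedback)
        ok, errors = compiles(candidate)
        if ok:
            return candidate  # the deterministic gate passed
        feedback = errors  # feed the errors back and try again
    raise RuntimeError("no compiling candidate after retries")
```

The generator stays nondeterministic; the gate in front of the human does not.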

All sorts of ways to knock LLM-generated code. Most I disagree with, all colorable. But this article is based on a model of LLM code generation from 6 months ago which is simply no longer true, and you can't gaslight your way back to Q1 2024.

lovich | No.44461933
> agent constructions

> But this article is based on a model of LLM code generation from 6 months ago which is simply no longer true, and you can't gaslight your way back to Q1 2024.

You’re ahead of the curve and wondering why others don’t know what you do. If you’re not an AI company, a FAANG, or an AI evangelist, you likely haven’t heard of those solutions.

I’ve been trying to keep up with AI developments, and I only learned about MCP and agentic workflows 1-2 months ago. I consider myself failing at keeping up with cutting-edge AI development.
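
(In case it saves someone else the same catch-up: MCP, the Model Context Protocol, is basically a standard way to expose tools and data to a model, and an "agentic workflow" is the model calling those tools in a loop. A tool server can be as small as the sketch below; this assumes the reference Python SDK's FastMCP helper, and the exact API surface may have shifted since I looked.)

```python
# Minimal MCP tool-server sketch, assuming the reference Python SDK
# (`pip install mcp`); treat the exact API details as approximate.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("repo-tools")


@mcp.tool()
def read_file(path: str) -> str:
    """Let a connected model read a file from the working tree."""
    with open(path, "r", encoding="utf-8") as f:
        return f.read()


if __name__ == "__main__":
    # Speaks the protocol over stdio; an agent client connects and may call read_file.
    mcp.run()
```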

Edit:

Also, six months ago is Q1 2025, not 2024. Not sure if that was a typo or if you need a reminder of how rapidly this technology is iterating.