
627 points cratermoon | 1 comments | | HN request time: 0.211s | source
tptacek ◴[] No.44461381[source]
LLM output is crap. It’s just crap. It sucks, and is bad.

Still don't get it. LLM outputs are nondeterministic. LLMs invent APIs that don't exist. That's why you filter those outputs through agent constructions, which actually compile the code. The nondeterminism of LLMs doesn't make your compiler nondeterministic.
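A minimal sketch of that filtering loop, for the curious. `generate_code()` here is a hypothetical stand-in for any LLM call (stubbed with canned output so the sketch runs on its own); the point is that the compile step is the deterministic gate:

```python
def generate_code(prompt: str, feedback: str = "") -> str:
    # Placeholder for an LLM call; returns candidate source code.
    # Stubbed here: the first attempt has a syntax error, the retry
    # (after feedback) is valid.
    if not feedback:
        return "def add(a, b) return a + b"      # invalid syntax
    return "def add(a, b):\n    return a + b"

def agent_loop(prompt: str, max_attempts: int = 3) -> str:
    feedback = ""
    for _ in range(max_attempts):
        candidate = generate_code(prompt, feedback)
        try:
            compile(candidate, "<llm>", "exec")   # deterministic check
            return candidate                      # passed the gate
        except SyntaxError as e:
            feedback = f"SyntaxError: {e}"        # feed the error back
    raise RuntimeError("no compilable candidate")

code = agent_loop("write an add function")
```

Real agents swap `compile()` for the project's actual build and test suite, but the shape is the same: nondeterministic generator, deterministic filter.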

All sorts of ways to knock LLM-generated code. Most I disagree with, all colorable. But this article is based on a model of LLM code generation from 6 months ago which is simply no longer true, and you can't gaslight your way back to Q1 2024.

replies(7): >>44461418 #>>44461426 #>>44461474 #>>44461544 #>>44461933 #>>44461994 #>>44463037 #
beckthompson ◴[] No.44461418[source]
AIs still frequently make stuff up - there isn't really a way to get out of that. Have they improved a lot in the last six months? 100%! But they still make mistakes, and it's quite common.
replies(2): >>44461516 #>>44461572 #
csomar ◴[] No.44461572[source]
You can improve on that with:

1. A type-strict compiler.

2. https://github.com/isaacphi/mcp-language-server

LLMs will always make stuff up because they are lossy. In the same way, if I asked you to list the methods of some random library object, you wouldn't be able to; you'd pull that up from the documentation or your code-completion companion. LLMs are just now getting the tools for that.
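To illustrate the point: an LLM can't reliably recall a library's real methods, but a tool with access to the live runtime (which is what a language server gives it) can check them deterministically. A small sketch, with the `suggested` list standing in for hypothetical LLM guesses:

```python
import inspect

# Hypothetical LLM suggestions for methods on Python's built-in list.
# "push_back" is a plausible-sounding hallucination (it's C++'s name).
suggested = ["append", "extend", "push_back", "insert"]

# Introspect the real type, the way a language server would.
real_methods = {name for name, _ in inspect.getmembers(list)
                if not name.startswith("_")}

valid = [m for m in suggested if m in real_methods]
hallucinated = [m for m in suggested if m not in real_methods]
```

This is exactly the lookup a human does against docs or autocomplete; wiring it into the loop is what the linked mcp-language-server project is about.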

replies(1): >>44461611 #
beckthompson ◴[] No.44461611[source]
Oh for sure, I agree 100%! I was just saying that they will always make stuff up no matter what. Those are both good fixes, but at their core they can only "make stuff up".