
628 points cratermoon | 2 comments
tptacek ◴[] No.44461381[source]
> LLM output is crap. It’s just crap. It sucks, and is bad.

Still don't get it. LLM outputs are nondeterministic. LLMs invent APIs that don't exist. That's why you filter those outputs through agent constructions, which actually compile the code. The nondeterminism of LLMs doesn't make your compiler nondeterministic.
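
Concretely, the filter looks something like this; a minimal sketch, where `llm_generate` is a hypothetical stand-in rather than any real API, and the compile step is the deterministic gate:

```python
def llm_generate(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; not a real library."""
    raise NotImplementedError

def generate_compiling_code(prompt: str, max_attempts: int = 5) -> str | None:
    """Sample until the output actually compiles.

    The generator is nondeterministic; the gate is not. Anything that
    survives is at least syntactically valid. A real agent loop adds
    stricter deterministic gates: type checkers, linters, the test suite.
    """
    for _ in range(max_attempts):
        candidate = llm_generate(prompt)
        try:
            compile(candidate, "<llm-output>", "exec")  # deterministic syntax gate
            return candidate
        except SyntaxError:
            continue  # resample; garbage gets filtered, not shipped
    return None
```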

All sorts of ways to knock LLM-generated code. Most I disagree with, all colorable. But this article is based on a model of LLM code generation from 6 months ago which is simply no longer true, and you can't gaslight your way back to Q1 2024.

replies(7): >>44461418 #>>44461426 #>>44461474 #>>44461544 #>>44461933 #>>44461994 #>>44463037 #
62702b077f3 ◴[] No.44461426[source]
> The garbage generator generates garbage, but if you run it enough times it gets something slightly-less-garbage that can satisfy a compiler! You're stupid if you don't think this is awesome!
replies(4): >>44461513 #>>44461517 #>>44461643 #>>44462226 #
Shorel ◴[] No.44462226[source]
You are right about this.

Also, there's a mathematical result saying that's enough: Condorcet's jury theorem, which says that if each independent voter is right more than half the time, majority-vote accuracy climbs toward 100% as you add voters. And it's been demonstrated empirically.

There was an experiment where researchers trained 16 pigeons to classify tumours as cancerous or benign from photographs.

Individually, each pigeon averaged about 85% accuracy. But pooling the whole flock's votes (minus one outlier) pushed accuracy to 99%.

If you add enough silly brains, you get one super smart brain.
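
You can check the arithmetic by treating each pigeon as an independent 85% classifier under majority vote; the independence assumption is the idealized part, but it reproduces the 99% figure:

```python
from math import comb

def majority_vote_accuracy(n: int, p: float) -> float:
    """P(a majority of n independent classifiers, each with accuracy p, is correct).

    Even n can tie; ties are broken by a fair coin flip.
    """
    need = n // 2 + 1
    win = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))
    if n % 2 == 0:
        tie = comb(n, n // 2) * (p * (1 - p)) ** (n // 2)
        win += tie / 2
    return win

print(majority_vote_accuracy(1, 0.85))   # 0.85: one pigeon
print(majority_vote_accuracy(16, 0.85))  # ~0.999: the flock
```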

replies(1): >>44462264 #
1. Lariscus ◴[] No.44462264[source]
It's also mathematically proven that infinite monkeys typing on typewriters for eternity will almost surely recreate all the works of Shakespeare. It still takes someone with an actual brain to recognize the correct output.
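
A toy version of that loop, with the target shrunk so it terminates; note that all the intelligence lives in the recognizer, not in the typing:

```python
import random
import string

ALPHABET = string.ascii_lowercase + " "

def monkey_types(length: int) -> str:
    """A monkey: uniform random keystrokes, no notion of correctness."""
    return "".join(random.choice(ALPHABET) for _ in range(length))

def brain_recognizes(text: str, target: str) -> bool:
    """The 'actual brain': a verifier that knows what correct output looks like."""
    return text == target

# Keep the target tiny so this finishes: a 3-letter word takes ~27^3 tries,
# while the full works of Shakespeare would outlast the universe.
target, attempts = "ham", 0
while not brain_recognizes(monkey_types(len(target)), target):
    attempts += 1
print(f"found {target!r} after {attempts} tries")
```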
replies(1): >>44462302 #
2. Shorel ◴[] No.44462302[source]
Yep, there's a positive feedback loop missing in all this LLM stuff.