627 points cratermoon | 8 comments
tptacek No.44461381
> LLM output is crap. It’s just crap. It sucks, and is bad.

Still don't get it. LLM outputs are nondeterministic. LLMs invent APIs that don't exist. That's why you filter those outputs through agent constructions, which actually compile code. The nondeterminism of LLMs doesn't make your compiler nondeterministic.
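
A rough sketch of the kind of loop that means, in Python, assuming a C toolchain (cc) on the PATH and a hypothetical llm_generate callable standing in for whatever model API you use: generate, try to compile, feed the errors back, retry. The compiler, not the model, is the gate.

    import os, subprocess, tempfile

    def generate_compiling_code(prompt, llm_generate, max_attempts=5):
        # Ask the model for code, compile it, and feed compiler
        # errors back into the prompt until something builds.
        feedback = ""
        for _ in range(max_attempts):
            code = llm_generate(prompt + feedback)  # hypothetical model call
            with tempfile.NamedTemporaryFile(suffix=".c", delete=False) as f:
                f.write(code.encode())
                path = f.name
            result = subprocess.run(["cc", "-c", path, "-o", os.devnull],
                                    capture_output=True, text=True)
            os.unlink(path)
            if result.returncode == 0:
                return code  # accepted by a deterministic compiler
            feedback = "\n\nThe compiler said:\n" + result.stderr
        raise RuntimeError("no compiling candidate within budget")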

All sorts of ways to knock LLM-generated code. Most I disagree with, all colorable. But this article is based on a model of LLM code generation from 6 months ago which is simply no longer true, and you can't gaslight your way back to Q1 2024.

replies(7): >>44461418 #>>44461426 #>>44461474 #>>44461544 #>>44461933 #>>44461994 #>>44463037 #
1. 62702b077f3 No.44461426
> The garbage generator generates garbage, but if you run it enough times it gets something slightly-less-garbage that can satisfy a compiler! You're stupid if you don't think this is awesome!
replies(4): >>44461513 #>>44461517 #>>44461643 #>>44462226 #
2. KPGv2 No.44461513
I had Copilot for a hot minute. When I wrote things like serializers and deserializers, it was incredible. So much time saved. But I didn't do it enough to make the personal cost worth it, so I cancelled.

It's annoying to have to hand-code that stuff. But without Copilot I have to. Or I can write some arcane regex and run it on existing code to get 90% of the way there. But writing the regex also takes a while.

Copilot was literally just suggesting the whole deserialization function after I'd finished the serializer, 100% correct code.
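
For illustration, the kind of mirrored boilerplate in question, as a minimal Python sketch with a made-up Point type: once the serializer exists, the deserializer is nearly mechanical, which is exactly the pattern completion a model is good at.

    import json
    from dataclasses import dataclass

    @dataclass
    class Point:
        x: float
        y: float
        label: str

    def serialize_point(p: Point) -> str:
        # Field-by-field mapping: tedious, but entirely predictable.
        return json.dumps({"x": p.x, "y": p.y, "label": p.label})

    def deserialize_point(s: str) -> Point:
        # The mirror image of serialize_point, the part that
        # reportedly gets completed in one shot.
        d = json.loads(s)
        return Point(x=d["x"], y=d["y"], label=d["label"])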

replies(1): >>44462291 #
3. ipdashc No.44461517
I don't understand the point of this style of argument.

There are oh-so-many issues with LLMs - plagiarism/IP rights, worsening education, unmaintainable code - this should be obvious to anyone. But painting them as totally useless just doesn't make sense. Of course they work. I've had a task I want to do, I ask the LLM in plain English, it gives me code, the code works, I get the task done faster than I would have figuring out the code myself. This process has happened plenty of times.

Which part of this do you disagree with, under your argument? Am I and all the other millions of people who have experienced this all collectively hallucinating (pun intended) that we got working solutions to our problems? Are we just unskilled for not being able to write the code quickly enough ourselves, and should go sod off? I'm joking a bit, but it's a genuine question.

4. literalAardvark No.44461643
Counterpoint: the brain also generates mostly garbage, just slower.
5. Shorel No.44462226
You are right about this.

Also, there's a mathematical result saying that's enough: majority voting among independent, better-than-chance judges converges on the right answer (the Condorcet jury theorem). And it has been demonstrated empirically.

There was an experiment in which researchers trained 16 pigeons to classify tumours in photographs as cancerous or benign.

Individually, each pigeon averaged about 85% accuracy, but the pooled votes of all the pigeons (excluding one outlier) reached 99% accuracy.

If you add enough silly brains, you get one super smart brain.
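
The arithmetic behind that is plain majority voting over independent judges, a Condorcet-jury-theorem-style calculation; a minimal Python sketch, using numbers like the study's:

    from math import comb

    def majority_vote_accuracy(n: int, p: float) -> float:
        # Probability that a strict majority of n independent judges,
        # each correct with probability p, gets the right answer.
        k_min = n // 2 + 1  # smallest winning majority (n odd)
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(k_min, n + 1))

    # Fifteen independent 85%-accurate "pigeons" voting together:
    print(majority_vote_accuracy(15, 0.85))  # ~0.999

Note the theorem assumes the judges' errors are independent, which real pigeons (and repeated LLM samples) only approximate.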

replies(1): >>44462264 #
6. Lariscus No.44462264
It's also mathematically proven that infinite monkeys typing on typewriters for eternity will almost surely recreate all the works of Shakespeare. It still takes someone with an actual brain to recognize the correct output.
replies(1): >>44462302 #
7. Shorel No.44462291
I remember writing LISP code that created the serialisers and deserialisers for me.

Now that everything is containerised and managed by Docker-style environments, I am thinking about giving SBCL another try; the end users only need to access the same JSON REST APIs anyway.

Everything old is new again =)
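
A rough Python analogue of what such a Lisp macro buys you, deriving both directions from the class definition itself (hypothetical example):

    import json
    from dataclasses import dataclass, fields, asdict

    def make_json_roundtrip(cls):
        # Derive serialize/deserialize from the field list, roughly
        # what a Lisp macro would expand to at compile time.
        def to_json(obj) -> str:
            return json.dumps(asdict(obj))
        def from_json(s: str):
            d = json.loads(s)
            return cls(**{f.name: d[f.name] for f in fields(cls)})
        return to_json, from_json

    @dataclass
    class User:
        name: str
        age: int

    user_to_json, user_from_json = make_json_roundtrip(User)
    print(user_from_json(user_to_json(User("Ada", 36))))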

8. Shorel No.44462302
Yep, there's some positive feedback loop missing in all this LLM stuff.