
378 points by todsacerdoti | 1 comment
xnorswap | No.44984684
I won't say too much, but I recently had an experience where it was clear that, when talking with a colleague, I was getting back ChatGPT output. I felt sick; this just isn't how it should be. I'd rather have been ignored.

It didn't help that the LLM was confidently incorrect.

The smallest things can throw off an LLM, such as a difference in naming between configuration and implementation.

In the human world, with legacy stuff you can get into a situation where "everyone knows" that the foo setting is actually the setting for Frob, but an LLM will happily try to configure Frob directly or, worse, try to implement Foo from scratch.
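
To make that concrete, here's a hypothetical sketch in Python; only the foo/Frob names come from the situation I'm describing, everything else is made up:

    # The key is called "foo" for historical reasons, but everyone on the
    # team knows it really toggles the Frob subsystem.

    def enable_frob():
        print("Frob enabled")

    def apply_config(cfg: dict) -> None:
        # Institutional knowledge: "foo" -> Frob. An LLM reading only the
        # names tends to assume there must be a Foo feature, and will try
        # to wire up or implement Foo instead of touching Frob.
        if cfg.get("foo"):
            enable_frob()

    apply_config({"foo": True})  # legacy name, correct behaviour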

I'd always rather deal with bad human code than bad LLM code, because you can get into the mind of the person who wrote the bad human code. You can try to understand their misunderstanding. You can reason through their faulty reasoning.

With bad LLM code, you're dealing with a soul-crushing machine that cannot (yet) and will not (yet) learn from its mistakes, because it does not believe it makes mistakes (no matter how apologetic it gets).

1. duxup | No.44986656
Agreed on bad human code > bad LLM code.

Bad human code is at least more understandable to me in what it was trying to do. There's a goal you can figure out, and then you can fix it. It generally operates within the context of the larger code to some extent.

Bad LLM code can be broken from start to finish in ways that make zero sense. It's even worse when it re-invents the wheel and replaces massive amounts of code. Humans aren't likely to just make up a function or method that doesn't exist and deploy it. That's not the best example, since you'd likely find that out fast, but it's the kind of screw-up that indicates the entire chunk of LLM code you're examining may in fact be fundamentally flawed beyond normal experience. In some cases you almost need to re-learn the entire codebase to truly realize "oh, this is THAT bad and none of this code is of any value".
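
A contrived sketch of that failure mode (none of these names are real, it's just the shape of it): the actual API only has finalize(), but LLM output will confidently call a plausible-sounding finalize_and_sync() that was never written.

    class Invoices:
        def finalize(self, customer_id: str) -> str:
            return f"finalized invoices for {customer_id}"

    def close_out_month(invoices: Invoices, customer_id: str) -> str:
        # An LLM-generated version might return
        # invoices.finalize_and_sync(customer_id) here; it reads fine in
        # review but blows up with AttributeError the first time it runs.
        return invoices.finalize(customer_id)

    api = Invoices()
    print(close_out_month(api, "acme"))       # works
    print(hasattr(api, "finalize_and_sync"))  # False: the method never existed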