xnorswap:
I won't say too much, but I recently had an experience where it was clear that, when talking with a colleague, I was getting back ChatGPT output. I felt sick; this just isn't how it should be. I'd rather have been ignored.

It didn't help that the LLM was confidently incorrect.

The smallest things can throw off an LLM, such as a difference in naming between configuration and implementation.

In the human world, you can get into a situation with legacy systems where "everyone knows" that the foo setting is actually the setting for Frob. An LLM doesn't know that: it'll happily go hunting for a way to configure Frob directly or, worse, try to implement Foo from scratch.
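To make that concrete, here's a minimal hypothetical sketch of the foo/Frob situation (the names build_frob and the retry semantics are invented for illustration, not from any real codebase):

    # settings.yaml (hypothetical legacy config)
    #   foo: 5   <- "everyone knows" this actually tunes Frob's retry count

    class Frob:
        def __init__(self, retries: int):
            self.retries = retries

    def build_frob(config: dict) -> Frob:
        # Historical accident: the key is named "foo", but it configures Frob.
        # A human learns this once and moves on; an LLM keying on the names
        # may look for a "frob" setting that doesn't exist, or go implement
        # a Foo feature from scratch.
        return Frob(retries=config["foo"])

    frob = build_frob({"foo": 5})
    assert frob.retries == 5

Tribal knowledge papers over the mismatch; a model reading the names literally can't.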

I'd always rather deal with bad human code than bad LLM code, because you can get into the mind of the person who wrote the bad human code. You can try to understand their misunderstanding. You can reason through their faulty reasoning.

With bad LLM code, you're dealing with a soul-crushing machine that cannot (yet) and will not (yet) learn from its mistakes, because it does not believe it makes mistakes (no matter how apologetic it gets).

chasd00:
I gave a PowerPoint of 4-5 slides laying out an approach to implementing a business requirement to a very junior dev. I wanted to make sure they understood what was going on, so I asked them to review the slides and then explain it back to me as if I were seeing them for the first time. What I got back was the typical overly verbose and articulate review from ChatGPT or some other LLM. I thought it was pretty funny that they thought it would work, let alone be acceptable to do that. When I called them and said, "now do it for real", I ended up answering a dozen questions, but hung up knowing they actually did understand the approach.
sigotirandolas:
> What I got back was the typical overly verbose and articulate review from ChatGPT or some other LLM. I thought it was pretty funny that they thought it would work, let alone be acceptable to do that.

Did that end up working for you?

I had this same experience recently, and it sent my expectations for that dev through the floor; it just felt so wrong.

I made it abundantly clear that it was substandard work, with comically wrong content and phrasing, hoping he would understand that I trust _him_ to do the work. But I still later saw signs of it all over again.

I wish there were some option other than "move on". I'm just lost, and scarred.