
378 points todsacerdoti | 1 comment | source
xnorswap ◴[] No.44984684[source]
I won't say too much, but I recently had an experience where it was clear that when talking with a colleague, I was getting back chat GPT output. I felt sick, like this just isn't how it should be. I'd rather have been ignored.

It didn't help that the LLM was confidently incorrect.

The smallest things can throw off an LLM, such as a difference in naming between configuration and implementation.

In the human world, you can get into a situation with legacy stuff where "everyone knows" that the foo setting is actually the setting for Frob, but an LLM will happily try to configure Frob or, worse, try to implement Foo from scratch.
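
A minimal sketch of that kind of mismatch (the names foo and Frob come from the comment above; the config shape and code are invented purely for illustration):

    # Hypothetical sketch: the config key is named "foo", but everyone on the
    # team knows it really tunes Frob. Nothing in the code says so explicitly.
    config = {
        "foo": {"retries": 5},   # despite the name, this configures Frob
    }

    class Frob:
        def __init__(self, retries: int):
            self.retries = retries

    def build_frob(config: dict) -> Frob:
        # Institutional knowledge: the "foo" section feeds Frob.
        # An LLM reading this cold tends to look for a "frob" key instead,
        # or to invent a Foo component that never existed.
        return Frob(**config["foo"])

    print(build_frob(config).retries)  # prints 5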

I'd always rather deal with bad human code than bad LLM code, because you can get into the mind of the person who wrote the bad human code. You can try to understand their misunderstanding. You can reason through their faulty reasoning.

With bad LLM code, you're dealing with a soul-crushing machine that cannot (yet) and will not (yet) learn from its mistakes, because it does not believe it makes mistakes (no matter how apologetic it gets).

replies(14): >>44984808 #>>44984938 #>>44984944 #>>44984959 #>>44985002 #>>44985018 #>>44985019 #>>44985160 #>>44985639 #>>44985759 #>>44986197 #>>44986656 #>>44987830 #>>44989514 #
ryandvm ◴[] No.44985759[source]
I had an experience earlier this week that was kind of surreal.

I'm working with a fairly arcane technical spec that I don't really understand all that well, so I ask Claude to evaluate one of our internal proposals against the spec for conformance. It highlights a bunch of mistakes in our internal proposal.

I send those off to someone in our company who's supposed to be an authority on the arcane spec, with the warning that it was LLM-generated and so might be nonsense.

He feeds my message to his LLM and asks it to evaluate the criticisms. He then messages me back with the response from his LLM and asks me what I think.

We are functionally administrative assistants for our AIs.

If this is the future of software development, I don't like it.

replies(1): >>44985824 #
xeonmc ◴[] No.44985824[source]
In your specific case, I think it’s likely an intentionally pointed response to your own use of an LLM.
replies(2): >>44988384 #>>44991435 #
jetsnoc ◴[] No.44991435{3}[source]
I'll admit it. I've done this, but only a few times and only when someone sent me truly egregious AI slop: the kind where it's obvious no human who respects my time ever looked at it.

My reaction is usually, "Oh, we're doing this? Fine." I'll even prompt my LLM with something like, "Make it sound as corporate and AI-generated as possible." Or, if I'm feeling especially petty, "Write this like you're trying to win the 2025 award for Most Corporate Nonsense, and you're a committee at a Fortune 500 company competing to generate the most boilerplate possible." It's petty, sure, but there's something oddly cathartic about responding to slop with slop.