
378 points by todsacerdoti | 2 comments
xnorswap No.44984684
I won't say too much, but I recently had an experience where it was clear that, when talking with a colleague, I was getting back ChatGPT output. I felt sick; this just isn't how it should be. I'd rather have been ignored.

It didn't help that the LLM was confidently incorrect.

The smallest things can throw off an LLM, such as a difference in naming between configuration and implementation.

In the human world, you can get into a situation with legacy stuff where "everyone knows" that the foo setting is actually the setting for Frob, but an LLM will happily try to configure Frob or, worse, try to implement Foo from scratch.

I'd always rather deal with bad human code than bad LLM code, because you can get into the mind of the person who wrote the bad human code. You can try to understand their misunderstanding. You can reason through their faulty reasoning.

With bad LLM code, you're dealing with a soul-crushing machine that cannot (yet) and will not (yet) learn from its mistakes, because it does not believe it makes mistakes (no matter how apologetic it gets).

clickety_clack No.44985160
Ugh. I worked with a PM who used AI to generate PRDs. Pretty often, we’d get to a spot where we were like “what do you mean by this?” and he’d respond that he didn’t know, the AI wrote it. It’s like he just stopped trying to actually communicate an idea and replaced it with performative document creation. The effect was to push his job of understanding requirements down to me, and I didn’t really want to interact with someone who couldn’t be bothered figuring out his own thoughts before trying to put me to work implementing them, so I left the team.
1. nradov No.44987294
Well, that's when you escalate the concern (tactfully and confidentially) to your resource manager and/or the Product Manager's resource manager. And if they don't take corrective action, then it's time to look for a new job.
2. clickety_clack No.44988862
If I was stuck there I probably would have pushed it, but I had better options than setting out on an odyssey to reform a product team.

It got me thinking that, in general, people with options will probably sort themselves out of those situations and into organizations with like-minded people who use AI as a tool to multiply their impact (and I flatter myself to think that it will be high-ability people who have those options), leaving those more reliant on AI to operate at the limit of what they get from OpenAI et al.