xnorswap No.44984684
I won't say too much, but I recently had an experience where it was clear that, when talking with a colleague, I was getting back ChatGPT output. I felt sick, like this just isn't how it should be. I'd rather have been ignored.

It didn't help that the LLM was confidently incorrect.

The smallest things can throw off an LLM, such as a difference in naming between configuration and implementation.

In the human world, with legacy stuff you can get into a situation where "everyone knows" that the foo setting is actually the setting for Frob, but an LLM will happily try to configure Frob or, worse, try to implement Foo from scratch.
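To make that concrete, here's a made-up sketch of the kind of naming drift I mean (the foo/Frob names are obviously invented, nothing from a real codebase):

    # Hypothetical legacy config: the key is still called "foo" for
    # historical reasons, but everyone on the team knows it actually
    # toggles the Frob subsystem.
    config = {"foo": True}

    def start_frob(cfg):
        # Frob is controlled by the legacy "foo" flag; there is no "frob" key.
        if cfg.get("foo"):
            print("Frob enabled")

    start_frob(config)

A human eventually absorbs that tribal knowledge; an LLM reading the code cold tends to hunt for a "frob" setting, or to assume "foo" must belong to some Foo feature that doesn't exist.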

I'd always rather deal with bad human code than bad LLM code, because you can get into the mind of the person who wrote the bad human code. You can try to understand their misunderstanding. You can reason through their faulty reasoning.

With bad LLM code, you're dealing with a soul-crushing machine that cannot (yet) and will not (yet) learn from its mistakes, because it does not believe it makes mistakes (no matter how apologetic it gets).

replies(14): >>44984808 #>>44984938 #>>44984944 #>>44984959 #>>44985002 #>>44985018 #>>44985019 #>>44985160 #>>44985639 #>>44985759 #>>44986197 #>>44986656 #>>44987830 #>>44989514 #
HankStallone No.44984938
It's annoying when it apologizes for a "misunderstanding" when it was just plain wrong about something. What would be wrong with it just saying, "I was wrong because LLMs are what they are, and sometimes we get things very wrong"?

Kinda funny example: The other day I asked Grok what a "grandparent" comment is on HN. It said it's the "initial comment" in a thread. Not coincidentally, that was the same answer I found in a Reddit post that was the first result when I searched for the same thing on DuckDuckGo, but I was pretty sure that was wrong.

So I gave Grok an example: "If A is the initial comment, and B is a reply to A, and C a reply to B, and D a reply to C, and E a reply to D, which is the grandparent of C?" Then it got it right without any trouble. So then I asked: But you just said it's the initial comment, which is A. What's the deal? And then it went into the usual song and dance about how it misunderstood and was super-sorry, and then ran through the whole explanation again of how it's really C and I was very smart for catching that.

I'd rather it just said, "Oops, I got it wrong the first time because I crapped out the first thing that matched in my training data, and that happened to be bad data. That's just how I work; don't take anything for granted."

replies(2): >>44985543 #>>44986512 #
redshirtrob No.44985543
Ummm, are you saying that C is the grandparent of C, or do you have a typo in your example? Sure, the initial comment is not necessarily the grandparent, but in your ABCDE example, A is the grandparent of C, and C is the grandparent of E.
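To put it in code (just a toy parent map, nothing to do with how HN actually stores comments):

    # Toy reply chain: each comment maps to its parent; the root has none.
    parents = {"A": None, "B": "A", "C": "B", "D": "C", "E": "D"}

    def grandparent(comment):
        # The grandparent is the parent of the parent, if both exist.
        parent = parents.get(comment)
        return parents.get(parent) if parent else None

    print(grandparent("C"))  # A
    print(grandparent("E"))  # C
    print(grandparent("B"))  # None -- B's parent is the root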

Maybe I'm just misreading your comment, but it has me confused enough to reset my password, login, and make this child comment.

replies(1): >>44986399 #
HankStallone No.44986399
Yes, it was a typo; I meant to say I asked it the grandparent of E. Thanks for catching that.