323 points steerlabs | 7 comments
keiferski No.46192154
The thing that bothers me the most about LLMs is how they never seem to understand "the flow" of an actual conversation between humans. When I ask a person something, I expect a short reply that includes a follow-up question or a request for details or clarification. A conversation is thus an ongoing "dance" where the questioner and answerer gradually arrive at the same shared meaning.

LLMs don't do this. Instead, every question is immediately answered, with extreme confidence, in a paragraph or more of text. I know you can minimize this by configuring the settings on your account, but to me it just highlights that it's not operating in a way remotely similar to the human-to-human exchange I described above. I constantly find myself saying, "No, I meant [concept] in this way, not that way," and then getting annoyed at the robot because it's masquerading as a human.
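
The closest workaround I've found is pinning the behavior in a system prompt rather than hunting through account settings. A minimal sketch, assuming the OpenAI Python client; the instruction wording is my own, and models only honor it some of the time:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Assumed instruction text: push the model toward the short,
    # clarify-first replies a human would give. Compliance varies.
    messages = [
        {"role": "system", "content": (
            "Reply in at most two sentences. If my question is ambiguous, "
            "ask one clarifying question instead of guessing an answer."
        )},
        {"role": "user", "content": "Why is my build slow?"},
    ]

    resp = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(resp.choices[0].message.content)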

rafamct No.46192230
Yes, you're totally right! I misunderstood what you meant; let me write six more paragraphs based on a similar misunderstanding rather than just trying to get clarification from you.
1. wlesieutre No.46192480
My favorite is when it bounces back and forth between the same two wrong answers, each time admitting that the most recent answer is wrong and going back to the previous wrong answer.

It doesn't matter if you tell it "that's not correct, and neither is ____, so don't try that instead"; it likes those two answers and it's going to keep using them.

3. BubbleRings No.46193053
Ha! Just experienced this. It was very frustrating.
4. amelius No.46194193
They really need to add a "punish the LLM" button.
5. heavyset_go No.46194954
The false info is baked into its context at that point in the conversation, and it gets stuck in a local minimum trying to generate a response to that context.
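
When that happens, appending yet another correction just feeds the poison back into the context. What sometimes works is pruning the failed turns and restating the constraints in a fresh message. A rough sketch, assuming the OpenAI Python client; retry_with_clean_context and its substring filter are illustrative glue, not a library feature:

    from openai import OpenAI

    client = OpenAI()

    def retry_with_clean_context(history, question, known_bad):
        # Drop every turn that mentions an answer we already rejected,
        # so the rejected answers stop dominating the context.
        pruned = [m for m in history
                  if not any(bad in m["content"] for bad in known_bad)]
        pruned.append({
            "role": "user",
            "content": question + "\nDo not answer with any of: "
                       + ", ".join(known_bad),
        })
        resp = client.chat.completions.create(model="gpt-4o",
                                              messages=pruned)
        return resp.choices[0].message.content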
6. danuker No.46195675
Some services have a thumbs-down button.
7. amelius No.46205900
I need something stronger than that.