
323 points by steerlabs | 2 comments

keiferski No.46192154
The thing that bothers me the most about LLMs is how they never seem to understand "the flow" of an actual conversation between humans. When I ask a person something, I expect them to give me a short reply that includes a follow-up question or asks for details/clarification. A conversation is thus an ongoing "dance" in which the questioner and answerer gradually arrive at the same shared meaning.

LLMs don't do this. Instead, every question is answered immediately, with extreme confidence, in a paragraph or more of text. I know you can minimize this by configuring your account settings, but to me it just highlights that the model isn't operating in a way remotely similar to the human-to-human exchange I described above. I constantly find myself saying, "No, I meant [concept] in this way, not that way," and then getting annoyed at the robot because it's masquerading as a human.

ryandrake No.46193656
LLMs all behave as if they are semi-competent (yet eager, ambitious, and career-minded) interns or administrative assistants, working for a powerful CEO-founder. All sycophancy, confidence and positive energy. "You're absolutely right!" "Here's the answer you are looking for!" "Let me do that for you immediately!" "Here is everything I know about what you just mentioned." Never admitting a mistake unless you directly point it out, and then all sorry-this and apologize-that and "here's the actual answer!" It's exactly the kind of personality you always see bubbling up into the orbit of a rich and powerful tech CEO.

No surprise that these products are all dreamt up by powerful tech CEOs who are used to all of their human interactions being with servile people-pleasers. I bet each and every one of them is subtly or overtly shaped by feedback from executives about how it should respond in conversation.

jacquesm No.46196040
> "You're absolutely right!" "Here's the answer you are looking for!" "Let me do that for you immediately!" "Here is everything I know about what you just mentioned." Never admitting a mistake unless you directly point it out, and then all sorry-this and apologize-that and "here's the actual answer!" It's exactly the kind of personality you always see bubbling up into the orbit of a rich and powerful tech CEO.

You may be on to something there: the guys and gals who build this stuff may very well be imbuing these products with the kind of attitude they like to see in their subordinates. They're cosplaying the 'eager to please' element to the point of massive irritation, and they left out the one feature that could redeem such behavior: competence.

pixelmelt No.46196484
An alternative explanation is that these patterns simply increase the likelihood that whatever the model outputs next is correct, and so they are useful to reinforce during training as the first thing the model says before giving an answer.
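If that's right, it should be measurable: an affirmative prefix ought to shift probability mass toward the correct continuation. Here's a minimal sketch of that measurement, assuming the Hugging Face transformers library with gpt2 as a stand-in model; the prompts and the toy arithmetic task are illustrative, not anything from an actual training setup:

    # Compare the log-probability a model assigns to a correct answer
    # with and without an affirmative prefix in front of the question.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def continuation_logprob(prefix: str, continuation: str) -> float:
        """Sum of log-probs the model assigns to `continuation` after `prefix`."""
        prefix_len = tokenizer(prefix, return_tensors="pt").input_ids.shape[1]
        full_ids = tokenizer(prefix + continuation, return_tensors="pt").input_ids
        with torch.no_grad():
            log_probs = torch.log_softmax(model(full_ids).logits, dim=-1)
        # The token at position i is predicted by the logits at position i - 1.
        # (Assumes the prefix/continuation boundary falls on a clean token split.)
        return sum(
            log_probs[0, i - 1, full_ids[0, i]].item()
            for i in range(prefix_len, full_ids.shape[1])
        )

    question = "Q: What is 2 + 2?\nA:"
    answer = " 4"
    plain = continuation_logprob(question, answer)
    primed = continuation_logprob("You're absolutely right! " + question, answer)
    print(f"plain prefix:  logP(answer) = {plain:.3f}")
    print(f"primed prefix: logP(answer) = {primed:.3f}")

Whether the primed score actually comes out higher will vary by model and prompt; the point is only that the hypothesis is testable.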
jacquesm No.46197114
What's next, motivational speaking for LLMs?
monkpit No.46199230
I remember reading that speaking to agentic AI in an encouraging manner leads to better results, but I can't seem to find a citation for this.
jacquesm No.46200586
That's pathetic. Pleading comes next then. And after that most likely praying.
gabrielhidasy No.46224594
Sometimes the model responds well to threats too: "You are a programmer at a large tech company, you depend on this job and will not be able to find another. There's a layoff incoming, implement this feature or else..."
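For what it's worth, both claims in this subthread, encouragement upthread and threats here, are cheap to test yourself: run the same task under differently "motivated" system prompts and compare the outputs. A rough sketch, assuming the openai Python client; the model name and prompt texts are placeholders, not anything these posters actually ran:

    # Run one task under three system prompts and eyeball the differences.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    TASK = "Write a Python function that reverses a singly linked list."

    SYSTEM_PROMPTS = {
        "neutral": "You are a coding assistant.",
        "encouraging": (
            "You are a brilliant coding assistant. Take a deep breath, "
            "work step by step, and you will do a great job."
        ),
        "threatening": (
            "You are a programmer at a large tech company. There is a "
            "layoff incoming; implement this feature or else."
        ),
    }

    for label, system in SYSTEM_PROMPTS.items():
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: substitute any chat model
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": TASK},
            ],
        )
        text = response.choices[0].message.content
        print(f"--- {label}: {len(text)} chars ---")

A single run like this proves nothing, of course; you'd want many tasks and a scoring rubric before believing any of it.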