323 points by steerlabs | 18 comments
keiferski No.46192154
The thing that bothers me the most about LLMs is how they never seem to understand "the flow" of an actual conversation between humans. When I ask a person something, I expect them to give me a short reply that includes a follow-up question, or asks for details or clarification. A conversation is thus an ongoing "dance" in which the questioner and answerer gradually arrive at the same shared meaning.

LLMs don't do this. Instead, every question is immediately answered, with extreme confidence, in a paragraph or more of text. I know you can minimize this by configuring the settings on your account, but to me it just highlights how it's not operating in a way remotely similar to the human-human one I mentioned above. I constantly find myself saying, "No, I meant [concept] in this way, not that way," and then getting annoyed at the robot because it's masquerading as a human.
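
To be concrete about what I mean by settings: in the ChatGPT UI it's the custom-instructions field, and the rough API equivalent is a system message. A minimal sketch with the OpenAI Python SDK, where the model name and the instruction wording are just my own illustrative assumptions, not a recommended configuration:

    # Minimal sketch: steering an LLM toward shorter replies and
    # clarifying questions via a system message. The instruction text
    # and model name below are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    style = (
        "Keep replies short. If my question is ambiguous, ask one "
        "clarifying question before answering instead of guessing."
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": style},
            {"role": "user", "content": "Can you explain transformers?"},
        ],
    )
    print(resp.choices[0].message.content)

Even with that, the default register leaks through; it reshapes the output, it doesn't change the underlying behavior.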

replies(37): >>46192230 #>>46192268 #>>46192346 #>>46192427 #>>46192525 #>>46192574 #>>46192631 #>>46192754 #>>46192800 #>>46192900 #>>46193063 #>>46193161 #>>46193374 #>>46193376 #>>46193470 #>>46193656 #>>46193908 #>>46194231 #>>46194299 #>>46194388 #>>46194411 #>>46194483 #>>46194761 #>>46195048 #>>46195085 #>>46195309 #>>46195615 #>>46195656 #>>46195759 #>>46195794 #>>46195918 #>>46195981 #>>46196365 #>>46196372 #>>46196588 #>>46197200 #>>46198030 #
ryandrake No.46193656
LLMs all behave as if they are semi-competent (yet eager, ambitious, and career-minded) interns or administrative assistants, working for a powerful CEO-founder. All sycophancy, confidence and positive energy. "You're absolutely right!" "Here's the answer you are looking for!" "Let me do that for you immediately!" "Here is everything I know about what you just mentioned." Never admitting a mistake unless you directly point it out, and then all sorry-this and apologize-that and "here's the actual answer!" It's exactly the kind of personality you always see bubbling up into the orbit of a rich and powerful tech CEO.

No surprise that these products are all dreamt up by powerful tech CEOs who are used to all of their human interactions being with servile people-pleasers. I bet each and every one of them is subtly or overtly shaped by feedback from executives about how it should respond in conversation.

replies(12): >>46193679 #>>46193872 #>>46193884 #>>46194322 #>>46195018 #>>46195066 #>>46195075 #>>46195385 #>>46196040 #>>46196762 #>>46196779 #>>46213184 #
1. rockskon No.46195018
Analogies of LLMs to humans obfuscate the problem. LLMs aren't like humans of any sort in any context. They're chat bots. They do not "think" like humans, and applying human-like logic to them does not work.
replies(2): >>46195768 #>>46195803 #
2. not2b No.46195768
You're right, mostly, but the fact remains that the behavior we see is produced by training, and the training is driven by companies run by execs who like this kind of sycophancy. So it's certainly a factor: humans are producing them, and humans are deciding when a new model is good enough for release.
replies(2): >>46195832 #>>46196193 #
3. Retric No.46195803
It’s not about thinking, it’s about what they are trained to do. You could train an LLM to always respond to every prompt by repeating the prompt in Spanish, but that’s not the desired behavior.
4. rockskon No.46195832
Do you honestly think an executive wanted a chat bot that confidently lies?
replies(6): >>46195991 #>>46196000 #>>46196057 #>>46196171 #>>46196612 #>>46197157 #
5. not2b No.46195991
No, but they like the sycophancy.
6. dontlikeyoueith No.46196000
In practice, yes, though they wouldn't think of it that way. That's the kind of people they surround themselves with, so it's what they think human interaction is actually like.
replies(1): >>46196187 #
7. jacquesm No.46196057
Given the matrix 'competent/incompetent' / 'sycophant/critic', I would not take it as read that the 'incompetent/sycophant' quadrant would have no adherents, and I would not be surprised if it were the dominant one.
8. mrguyorama No.46196171
People with immense wealth, connections, influence, and power demonstrably struggle not to surround themselves with people who only say what the powerful person already wants to hear, regardless of reality.

Putin didn't come to think Russia could take Ukraine in three days, to literal celebration by the populace, because he only works with honest folks, for example.

Rich people get disconnected from reality because people who insist on speaking truth and reality around them tend to stop getting invited to the influence peddling sessions.

9. rockskon No.46196187
"I want a chat bot that's just as reliable at Steve! Sure he doesn't get it right all the time and he cost us the Black+Decker contract, but he's so confident!"

You're right! This is exactly what an executive wants to base the future of their business off of!

replies(2): >>46198839 #>>46200303 #
10. No.46196193
11. jandrese No.46196612
Do the lies look really good in a demo when you're pitching it to investors? Are they obscure enough that they won't stand out? If so, no problem.
12. ryandrake No.46197157
They may say they don't want to be lied to, but the incentives they put in place all but inevitably result in them being surrounded by lying yes-men. We've all worked for someone where we were warned never to give them bad news, or we'd be done for. So everyone just lies to them and tells them everything is on track. The Emperor's New Clothes[1].

1: https://en.wikipedia.org/wiki/The_Emperor%27s_New_Clothes

13. Retric No.46198839
You say that like it’s untrue, but they measurably prefer a lying but confident salesman over one who doesn’t act with that kind of confidence.

This is very slightly more rational than it seems because repeating or acting on a lie gives you cover.

14. dontlikeyoueith No.46200303
Yes, that is in fact their revealed preference.

Did you have a point?

replies(1): >>46201683 #
15. rockskon No.46201683
You use unfalsifiable logic. And you seem to argue that, given the choice, CEOs would prefer not to maximize revenue in favor of... what, affection for an imaginary intern?
replies(1): >>46209512 #
16. dontlikeyoueith No.46209512
Cute straw man.

You must be a CEO.

I'm not arguing anything. I'm observing reality. You're the one who is desperate to rationalize it.

replies(1): >>46209818 #
17. rockskon No.46209818
You are declaring your imagined logic as fact. Since I do not agree with the basis on which you pin your argument, there is no further point in discussion.
replies(1): >>46213129 #
18. dontlikeyoueith No.46213129
You're hallucinating things I did not say.