I'm absolutely right

(absolutelyright.lol)
648 points | yoavfr | 31 comments
tyushk ◴[] No.45138171[source]
I wonder if this is a tactic that LLM providers use to coerce the model into doing something.

Gemini will often start responses that use the canvas tool with "Of course", which forces the model down a line of tokens that ends with it attempting to fulfill the user's request. It happens often enough that it seems like it's not generated by the model itself, but instead inserted by the backend. Maybe "you're absolutely right" is used the same way?

replies(5): >>45138295 #>>45138496 #>>45138604 #>>45138641 #>>45154548 #
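The "inserted by the backend" idea is essentially response prefilling: seed the assistant turn with a fixed prefix so generation can only continue from there. A minimal sketch of the request shape, assuming an Anthropic-style messages API that treats a trailing partial assistant turn as already-generated text (the payload layout and model name are assumptions, and no request is actually sent):

```python
# Sketch of response prefilling: the backend seeds the assistant turn with a
# fixed prefix ("Of course"), so the model must continue from those tokens
# rather than choosing its own opening.

def build_prefilled_request(user_message, prefix="Of course"):
    """Return a chat payload whose last turn is a partial assistant message.

    Backends that support prefilling treat the trailing assistant turn as
    text the model has already produced, forcing the continuation to start
    from `prefix` instead of from scratch.
    """
    return {
        "model": "example-model",  # hypothetical model name
        "messages": [
            {"role": "user", "content": user_message},
            # The partial assistant turn is the prefill; generation resumes here.
            {"role": "assistant", "content": prefix},
        ],
    }

payload = build_prefilled_request("Update the canvas with a sorting demo.")
print(payload["messages"][-1])
```

The point is that the prefix never has to be sampled at all; once those tokens are in the context, completions that refuse or deflect become much less likely.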
1. nicce ◴[] No.45138295[source]
It is a tactic. OpenAI changes the tone of ChatGPT if you use casual language, for example; sometimes even the dialect. They try to be sympathetic and supportive, even when they should not be.

They fight for users' attention and for keeping them on their platform, just like social media platforms. Correctness is secondary; user satisfaction is primary.

replies(3): >>45138317 #>>45138572 #>>45138779 #
2. diggan ◴[] No.45138317[source]
> Correctness is secondary, user satisfaction is primary.

Kind of makes sense; not every user wants 100% correctness (just like in real life).

And if I want correctness (which I do), I can make the models prioritize that, since my satisfaction is directly linked to the correctness of the responses :)

3. ZaoLahma ◴[] No.45138572[source]
I find that the GPT-5 model has turned the friendliness way, way down. Topics that previously would have yielded long and (usefully) engaging conversations are now met with an "ok cool" kind of response.

I get it - we don't want LLMs to be reinforcers of bad ideas, but sometimes you need a little positivity to get past a mental barrier and do something that you want to do, even if what you want to do logically doesn't make much sense.

An "ok cool" answer is PERFECT for me to decide not to code something stupid (and learn something useful), and instead go and play video games (and learn nothing).

replies(4): >>45138741 #>>45141076 #>>45141516 #>>45145329 #
4. kuschku ◴[] No.45138741[source]
How would a "conversation" with an LLM influence what you decide to do, what you decide to code?

It's not like the attitude of your potato peeler is influencing how you cook dinner, so why is this tool so different for you?

replies(3): >>45138775 #>>45139404 #>>45140791 #
5. ZaoLahma ◴[] No.45138775{3}[source]
Might tell it "I want to do this stupid thing" and it goes "ok cool". Previously it would have gone "Oh really? Fantastic! How do you intend to solve x?" and off you go.
replies(1): >>45138792 #
6. kuschku ◴[] No.45138779[source]
> Correctness is secondary, user satisfaction is primary.

And that's where everything is going wrong. We should use technology to further the enlightenment, bring us closer to the truth, even if it is an inconvenient one.

replies(1): >>45139644 #
7. kuschku ◴[] No.45138792{4}[source]
But why does this affect your own attitude?

Do the suggestions given by your phone's keyboard whenever you type something affect your attitude in the same way? If not, why is ChatGPT then affecting your attitude?

replies(3): >>45138909 #>>45139160 #>>45139465 #
8. ZaoLahma ◴[] No.45138909{5}[source]
Using your potato peeler example:

If my potato peeler told me "Why bother? Order pizza instead." I'd be obese.

An LLM can directly influence your willingness to pursue an idea by how it responds to it. Interest and excitement, even if simulated, is more likely to make you pursue the idea than "ok cool".

replies(4): >>45139015 #>>45139030 #>>45139139 #>>45141870 #
9. fluoridation ◴[] No.45139015{6}[source]
Why, though? I'm with GP, I don't understand it at all. If I thought something was interesting, I wouldn't lose interest just because a person reacted with indifference; I just wouldn't tell them about it again.
10. kuschku ◴[] No.45139030{6}[source]
> If my potato peeler told me "Why bother? Order pizza instead." I'd be obese.

But why do you let yourself be influenced so much by others, or in this case, random filler words from mindless machines?

You should listen to your own feelings, desires, and wishes, not anything or anyone else. Try to find the motivation inside of you, try to have the conversation with yourself instead of with ChatGPT.

And if someone tells you "don't even bother", maybe show more of a fighting spirit and do it with even more energy just to prove them wrong?

(I know it's easier said than done, but my therapist once told me it's necessary to learn not to rely on external motivation)

replies(2): >>45139546 #>>45141474 #
11. nicce ◴[] No.45139139{6}[source]
It is very, very risky.

You are trusting the model never to recommend something that you definitely should not do, or something that serves the provider's interests rather than yours, when you are not capable of noticing it yourself. A separate problem is whether you have provided enough information for the model to actually make that decision, or whether the model will ask for more information before it begins to act.

12. kaffekaka ◴[] No.45139160{5}[source]
Are you really asking in good faith? It seems obvious to me that a tool such as ChatGPT can and will influence people's behavior. We are only too keen to anthropomorphize things around us; of course many or most people will interact with LLMs as if they were living beings.

This effect of LLMs on humans should be obvious, regardless of how much an individual technically knows that yes, it is only a text generating machine.

replies(1): >>45139626 #
13. peepee1982 ◴[] No.45139404{3}[source]
I have two potato peelers. If the one I like better is in the dishwasher I am not peeling potatoes. If one of my children wants to join me when I'm already peeling potatoes, I'll give them the preferred one and use the other one myself.

But I will not start peeling potatoes with the worse one.

replies(1): >>45143114 #
14. peepee1982 ◴[] No.45139465{5}[source]
This sounds like a contrarian troll question. Every tool we use has an effect on our attitudes in many subtle and sometimes not so subtle ways. It's one of the reasons so many of us are obsessed with tools.
replies(1): >>45139671 #
15. peepee1982 ◴[] No.45139546{7}[source]
You are influenced just as much. You're just not aware of it.

Also, I think you're completely missing the point of the conversation by glossing over the nuances of what is being said and relying on overgeneralizing platitudes and assumptions that in no way address the original sentiment.

16. kuschku ◴[] No.45139626{6}[source]
> Are you really asking in good faith?

I am — I grew up being bullied, and my therapists taught me that I shouldn't even let humans affect me in this way and instead should let it slide and learn to ignore it, or even channel my emotions into defiance.

Which is why I'm genuinely curious (and a bit bewildered) how people who haven't taken that path are going through life.

replies(1): >>45140589 #
17. escapecharacter ◴[] No.45139644[source]
You’re absolutely right.
replies(1): >>45139791 #
18. kuschku ◴[] No.45139671{6}[source]
> This sounds like a contrarian troll question.

See the sibling comment regarding my motivations for this question

> It's one of the reasons so many of us are obsessed with tools.

That touches on another question I never really understood.

So you choose tools based on the vibe they give you, because you want to get into a certain mood to do certain things?

replies(1): >>45148008 #
19. kuschku ◴[] No.45139791{3}[source]
So I'm assuming this is a tongue-in-cheek comment, and you actually disagree. I'd love to hear why, though.
20. braebo ◴[] No.45140589{7}[source]
We are all influenced by the external world whether we like it or not. The butterfly effect is an extreme example, but a direct interaction with anything, especially a talking rock, will influence us. Our outputs are a function of our inputs.

That said, being aware of the inputs and their effects on us, and consciously asserting influence over the inputs from within our function body, is incredibly valuable. It touches on mindfulness practices, promoting self awareness and strengthening our independence. While we can’t just flip a switch to be sociopaths fundamentally unaffected by others, we can still practice self awareness, stoicism, and strengthen our resolve as your therapist seems to be advocating for.

For those lacking the kind of awareness promoted by these flavors of mindfulness, the hypnotic effects of the storm are much more enveloping, for better or (more often) worse.

21. steveklabnik ◴[] No.45140791{3}[source]
I once had a refactoring that I wanted to do, but I was pretty sure it'd hit a lot of code and take a while. Some error handling in a web application.

I was able to ask Claude "hey, how many function signatures will this change" and "what would the most complex handler look like after this refactoring?" and "what would the simplest handler look like after this refactoring?"

That information helped contextualize what I was trying to intuit: is this a large job, or a small one? Is this going to make my code nicer, or not so much?

All of that info then went into the decision to do the refactoring.

replies(1): >>45141824 #
22. bt1a ◴[] No.45141076[source]
I have been using gpt-5 through the API a bit recently, and I had somewhat noticed this response behavior, but it's definitely confirming to hear it from someone else. It's much more willing (vs gpt-4*) to tell me I'm a stupid piece of shxt and not to do what I'm asking in the initial prompt
23. ZaoLahma ◴[] No.45141474{7}[source]
It’s not “by others”. It’s by circumstance.

It’s like any other tool. If I wanted to chop wood and noticed that my axe had gone dull, the likelihood of me going “ah f*ck it” and going fishing instead increases dramatically. I want to chop wood. I don’t want to go to the neighbor and borrow his axe, or sharpen my axe and then chop wood.

That’s what has happened with ChatGPT in a sense - it has gone dull. I know it used to work “better” and the way that it works now doesn’t resonate with me in the same way, so I’m less likely to pursue work that I would want to use ChatGPT as an extrinsic motivator for.

Of course if the intrinsic motivation is large enough I wouldn’t let a tool make the decision for me. If it’s mid October and the temperature is barely above freezing and I have no wood, I’ll gnaw through it with my teeth if necessary. I’ll go full beaver. But in early September when it’s 25C outside on a Friday? If the axe isn’t perfect, I’ll have a beer and go fishing.

24. octo888 ◴[] No.45141516[source]
This reads like a submarine ad lol. Especially the second paragraph
25. the_af ◴[] No.45141824{4}[source]
I think the person you're responding to is asking "how would the tone of the response influence you into doing/not doing something"?

Obviously the actual substance of the response matters, this is not under discussion.

But does it matter whether the LLM replies "ok, cool, this is what's going on [...]" vs "You are absolutely right! You are asking all the right questions, this is very insightful of you. Here's what we should do [...]"?

replies(1): >>45142315 #
26. the_af ◴[] No.45141870{6}[source]
> If my potato peeler told me "Why bother? Order pizza instead." I'd be obese.

But that's not really the right comparison.

The right comparison is your potato peeler saying (if it could talk): "ok, let's peel some stuff" vs "Owww wheee geez! That sounds fantastic! Let's peel some potatoes, you and me buddy, yes sireee! Woweeeee!" (read in a Rick & Morty's Mr Poopybutthole voice for maximum effect).

27. steveklabnik ◴[] No.45142315{5}[source]
Hm, yeah I guess you're probably right.

I find myself not being particularly upset by the tone thing. It seems like it really upsets some other people. Or rather, I guess I should say it may subconsciously affect me, but I haven't noticed.

I do giggle when I see "You're absolutely right" because it's a meme at this point, but I haven't considered it to be offensive or enjoyable.

28. ascorbic ◴[] No.45143114{4}[source]
And the moral of that story is to buy a three pack of Kuhn Rikon peelers.
replies(1): >>45178481 #
29. Aeolun ◴[] No.45145329[source]
> I get it - we don't want LLMs to be reinforcers of bad ideas, but sometimes you need a little positivity to get past a mental barrier and do something that you want to do, even if what you want to do logically doesn't make much sense.

If you want ceaseless positivity you should try Claude. The only possible way it’ll be negative is if you ask it to be.

30. peepee1982 ◴[] No.45148008{7}[source]
I choose tools based on many reasons. But the vibe they give me has a lot of weight, yes.

Another example: if you give me two programming fonts to choose from that are both reasonably legible, I'll have a strong preference for one over the other. And if I know I'm free to use my favorite programming font, I'll be more motivated to tackle a programming problem that I don't really feel like tackling because I'd rather tackle some other problem.

If the programming problem itself is interesting enough to pull me towards it, the programming font will have less of an effect on me.

Do you see where I'm going with this? A lot of little things pile up every day, each one influencing our decisions in small ways. Recognizing those things and becoming aware of them lets us - over time and many tiny adjustments - change our environment in ways that reduce friction and are conducive to our enjoyment of day-to-day life.

It's not that I necessarily won't be doing something because I'm unable to do it exactly the way I enjoy most. It'll just be more draining because now I have to put in more effort to get myself going and stay focused on the task.

31. peepee1982 ◴[] No.45178481{5}[source]
Thanks. Now I have to watch review videos for the next couple of hours and become an insufferable evangelist for the next couple of weeks.