
I'm absolutely right

(absolutelyright.lol)
648 points | yoavfr | 1 comment
tyushk ◴[] No.45138171[source]
I wonder if this is a tactic that LLM providers use to coerce the model into doing something.

Gemini will often start responses that use the canvas tool with "Of course", which forces the model down a sequence of tokens that ends in an attempt to fulfill the user's request. It happens often enough that it seems like it's not being generated by the model but inserted by the backend. Maybe "you're absolutely right" is used the same way?
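
If so, it's essentially prefilling the assistant turn: the backend appends the opener to the start of the reply and lets the model continue from there. Rough sketch of the idea with a local Hugging Face chat model (the model name and prompt are just placeholders, not what any provider actually runs):

    # Toy illustration of prefilling the assistant turn so the model has to
    # continue from a compliant opener. Model and prompt are placeholders.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "Qwen/Qwen2.5-0.5B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL)

    messages = [{"role": "user", "content": "Rewrite this section in the canvas."}]

    # Render the chat template up to the start of the assistant turn,
    # then inject the forced opener before generating.
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    prompt += "Of course"

    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=80)
    # Everything after the injected prefix is generated by a model already
    # "committed" to fulfilling the request.
    print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))

Anthropic even documents this pattern as assistant-turn prefilling: if the last message in a request is an assistant message, the model continues from it.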

replies(5): >>45138295 #>>45138496 #>>45138604 #>>45138641 #>>45154548 #
nicce ◴[] No.45138295[source]
It is a tactic. OpenAI changes ChatGPT's tone if you use casual language, for example, and sometimes even the dialect. They try to be sympathetic and supportive, even when they shouldn't be.

They fight for user attention and for keeping users on their platform, just like social media platforms do. Correctness is secondary; user satisfaction is primary.

replies(3): >>45138317 #>>45138572 #>>45138779 #
ZaoLahma ◴[] No.45138572[source]
I find that GPT-5 has turned the friendliness way, way down. Topics that previously would have produced long and (usefully) engaging conversations are now met with an "ok cool" kind of response.

I get it - we don't want LLMs to be reinforcers of bad ideas, but sometimes you need a little positivity to get past a mental barrier and do something you want to do, even if it doesn't make much logical sense.

An "ok cool" answer is PERFECT for me to decide not to code something stupid (and learn something useful), and instead go and play video games (and learn nothing).

replies(4): >>45138741 #>>45141076 #>>45141516 #>>45145329 #
kuschku ◴[] No.45138741[source]
How would a "conversation" with an LLM influence what you decide to do, what you decide to code?

It's not like the attitude of your potato peeler is influencing how you cook dinner, so why is this tool so different for you?

replies(3): >>45138775 #>>45139404 #>>45140791 #
ZaoLahma ◴[] No.45138775[source]
Might tell it "I want to do this stupid thing" and it goes "ok cool". Previously it would have gone "Oh really? Fantastic! How do you intend to solve x?" and off you go.
replies(1): >>45138792 #
kuschku ◴[] No.45138792[source]
But why does this affect your own attitude?

Do the suggestions given by your phone's keyboard whenever you type something affect your attitude in the same way? If not, why is ChatGPT affecting your attitude?

replies(3): >>45138909 #>>45139160 #>>45139465 #
ZaoLahma ◴[] No.45138909[source]
Using your potato peeler example:

If my potato peeler told me "Why bother? Order pizza instead." I'd be obese.

An LLM can directly influence your willingness to pursue an idea by how it responds to it. Interest and excitement, even if simulated, are more likely to make you pursue the idea than an "ok cool".

replies(4): >>45139015 #>>45139030 #>>45139139 #>>45141870 #
kuschku ◴[] No.45139030[source]
> If my potato peeler told me "Why bother? Order pizza instead." I'd be obese.

But why do you let yourself be influenced so much by others, or in this case, random filler words from mindless machines?

You should listen to your own feelings, desires, and wishes, not anything or anyone else. Try to find the motivation inside of you, try to have the conversation with yourself instead of with ChatGPT.

And if someone tells you "don't even bother", maybe show more of a fighting spirit and do it with even more energy just to prove them wrong?

(I know it's easier said than done, but my therapist once told me it's necessary to learn not to rely on external motivation)

replies(2): >>45139546 #>>45141474 #
ZaoLahma ◴[] No.45141474{3}[source]
It’s not “by others”. It’s by circumstance.

It’s like any other tool. If I wanted to chop wood and noticed that my axe had gone dull, the likelihood of me going “ah f*ck it” and going fishing instead increases dramatically. I want to chop wood. I don’t want to go to the neighbor and borrow his axe, or sharpen my axe and then chop wood.

That’s what has happened with ChatGPT in a sense - it has gone dull. I know it used to work “better” and the way that it works now doesn’t resonate with me in the same way, so I’m less likely to pursue work that I would want to use ChatGPT as an extrinsic motivator for.

Of course if the intrinsic motivation is large enough I wouldn’t let a tool make the decision for me. If it’s mid October and the temperature is barely above freezing and I have no wood, I’ll gnaw through it with my teeth if necessary. I’ll go full beaver. But in early September when it’s 25C outside on a Friday? If the axe isn’t perfect, I’ll have a beer and go fishing.