
I'm absolutely right (absolutelyright.lol)
648 points by yoavfr | 5 comments
pflenker No.45138538
Gemini keeps telling me "you've hit a common frustration/issue/topic/..." so often that it is actively pushing me away from using it. Either it makes me feel stupid because I asked a stupid question and it pretends, probably to spare my feelings, that everyone has the same problem, or it makes me feel stupid because I felt smart about asking my super duper edge-case question that probably no one has ever asked before, and it tells me that everyone is wondering the same thing. Either way I feel stupid.
replies(2): >>45138612, >>45139336
blinding-streak No.45138612
I don't think that's Gemini's problem necessarily. You shouldn't be so insecure.
replies(3): >>45138891, >>45139028, >>45141070
1. pflenker No.45138891
Not only is that a weird presumption on your part about my ostensible insecurities, it's also weird that the state of my own mental resilience should play any role at all when interacting with a tool.

If all other things are equal and one LLM is consistently vaguely annoying, for whatever reason, and the other isn't, I choose the other one.

Leaving myself aside, LLMs are broadly available and forced onto everyone for day-to-day use, including vulnerable and insecure groups. These groups should not have to adapt to the tool; the tool should adapt to its users.

replies(1): >>45139178
2. zahlman No.45139178
> Not only is that a weird presumption on your part about my ostensible insecurities

I'm not GP but I agree that it isn't universal, nor especially healthy or productive, to have the response you describe to being told that your issue is common. It would make sense if you could e.g. hear the insincerity in a person's tone of voice, but Gemini outputs text and the concept of sincerity is irrelevant to a computer program.

Focusing on the informational content seems to me like a good idea, so as to avoid https://en.wikipedia.org/wiki/ELIZA_effect.

> it's also weird that the state of my own mental resilience should play any role at all when interacting with a tool.

When I was a university student, my own mental resilience was absolutely instrumental to deciphering gcc error messages.

> LLMs are broadly available and forced onto everyone for day-to-day use

They say this kind of thing about cars and smartphones, too. Somehow I endure.

replies(2): >>45139322, >>45141081
3. pflenker No.45139322
> I'm not GP but I agree that it isn't universal, nor especially healthy or productive, to have the response you describe to being told that your issue is common. It would make sense if you could e.g. hear the insincerity in a person's tone of voice, but Gemini outputs text and the concept of sincerity is irrelevant to a computer program.

I now realise that my phrasing wasn't good: I thought I was using a universally known concept, and instead I sound as if Gemini's output affects me more than it does.

What I had in mind is the phenomenon that media, for example, deliberately exploit: a well-written whodunnit makes you feel smart because you were able to pick up the thread all by yourself. A poorly written game (looking at you, 80s text adventures!), on the other hand, lets you die and ridicules you for trying something out, making you feel stupid.

LLMs are generally tuned to make you _feel good_, partly by trying to tap into these same psychological phenomena, but in this case the attempt achieves the polar opposite.
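
As an aside, the chat APIs at least let you push back against that tuning with a system instruction. Here's a minimal sketch using Google's google-generativeai Python SDK; the model name and the instruction wording are my own assumptions, not an official fix for this habit:

    import google.generativeai as genai

    # Hypothetical placeholder: supply your own API key.
    genai.configure(api_key="YOUR_API_KEY")

    # system_instruction steers the tone of every turn in the session;
    # the wording here is purely illustrative.
    model = genai.GenerativeModel(
        "gemini-1.5-flash",
        system_instruction=(
            "Answer directly. Do not remark on how common, frustrating, "
            "or interesting the question is, and do not flatter the user."
        ),
    )

    response = model.generate_content("Why does gcc want -fPIC for shared objects?")
    print(response.text)

No guarantees it fully suppresses the habit, but it shows where the knob is.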

4. jennyholzer No.45141081
[flagged]
replies(1): >>45147774
5. tomhow No.45147774
Please stop this.

https://news.ycombinator.com/newsguidelines.html