
I'm absolutely right

(absolutelyright.lol)
648 points yoavfr | 16 comments
1. pflenker ◴[] No.45138538[source]
Gemini keeps telling me "you've hit a common frustration/issue/topic/..." so often it is actively pushing me away from using it. It either makes me feel stupid because I ask it a stupid question and it pretends - probably to not hurt my feelings - that everyone has the same problem, or it makes me feel stupid because I felt smart about asking my super duper edge case question no one else has probably ever asked before and it tells me that everyone is wondering the same thing. Either way I feel stupid.
replies(2): >>45138612 #>>45139336 #
2. blinding-streak ◴[] No.45138612[source]
I don't think that's Gemini's problem necessarily. You shouldn't be so insecure.
replies(3): >>45138891 #>>45139028 #>>45141070 #
3. pflenker ◴[] No.45138891[source]
Not only is that a weird presumption about my ostensible insecurities on your end, it's also weird that the state of my own mental resilience should play any role at all when interacting with a tool.

If all other things are equal and one LLM is consistently vaguely annoying, for whatever reason, and the other isn't, I choose the other one.

Leaving myself aside, LLMs are broadly available and strongly forced onto everyone for day-to-day use, including vulnerable and insecure groups. These groups should not adapt to the tool, the tool should adapt to the users.

replies(1): >>45139178 #
4. PaulStatezny ◴[] No.45139028[source]
Telling someone they "shouldn't be insecure" reminds me of this famous Bob Newhart segment on Mad TV.

Bob plays the role of a therapist, and when his client explains an issue she's having, his solution is, "STOP IT!"

> You shouldn't be so insecure.

Not assuming that there's any insecurity here, but psychological matters aren't "willed away". That's not how it works.

replies(2): >>45139105 #>>45141903 #
5. GLdRH ◴[] No.45139105{3}[source]
>That's not how it works.

Not with that attitude!

6. zahlman ◴[] No.45139178{3}[source]
> Not only is that a weird presumption about my ostensible insecurities on your end

I'm not GP but I agree that it isn't universal, nor especially healthy or productive, to have the response you describe to being told that your issue is common. It would make sense if you could e.g. hear the insincerity in a person's tone of voice, but Gemini outputs text and the concept of sincerity is irrelevant to a computer program.

Focusing on the informational content seems to me like a good idea, so as to avoid https://en.wikipedia.org/wiki/ELIZA_effect.

> it's also weird that the state of my own mental resilience should play any role at all when interacting with a tool.

When I was a university student, my own mental resilience was absolutely instrumental to deciphering gcc error messages.

> LLMs are broadly available and strongly forced onto everyone for day-to-day use

They say this kind of thing about cars and smartphones, too. Somehow I endure.

replies(2): >>45139322 #>>45141081 #
7. pflenker ◴[] No.45139322{4}[source]
> I'm not GP but I agree that it isn't universal, nor especially healthy or productive, to have the response you describe to being told that your issue is common. It would make sense if you could e.g. hear the insincerity in a person's tone of voice, but Gemini outputs text and the concept of sincerity is irrelevant to a computer program.

I now realise that my phrasing wasn't good: I thought I was using a universally known concept, which now makes it sound as if Gemini's output is affecting me more than it does.

What I had in mind is the phenomenon that is utilised e.g. in media: a well-written whodunnit makes you feel smart because you were able to spot the thread all by yourself. Or a poorly written game (looking at you, 80s text adventures!) lets you die and ridicules you for trying something out, making you feel stupid.

LLMs are generally tuned to make you _feel good_, partly by attempting to tap into the same psychological phenomena, but in this case it causes the polar opposite.

8. ziml77 ◴[] No.45139336[source]
Gemini also loves to say how much it deeply regrets its mistakes. In Cursor I pointed out that it needed to change something and I proceeded to watch every single paragraph in the chain of thought start with regrets and apologies.
replies(1): >>45139403 #
9. pflenker ◴[] No.45139403[source]
Very good point - at the risk of being called insecure again, I really do not want my tools to apologise to me all the time. That's just silly.
replies(1): >>45139633 #
10. dominicrose ◴[] No.45139633{3}[source]
All this discussion clearly indicates is that Gemini is insecure.

And why would it not be? It's a human spirit trapped inside a supercomputer for God's sake.

replies(1): >>45139827 #
11. pflenker ◴[] No.45139827{4}[source]
Gemini sometimes reminds me a bit of the turrets in Portal. Child-like, polite, apologetic and in control of dangerous[^1] tech it doesn't understand.

[^1]: OK, the comparison falls apart here - at least as long as MCP isn't involved.

12. jennyholzer ◴[] No.45141070[source]
the machine is always right. adjust your feelings to align with the output of the machine.
replies(1): >>45148274 #
13. jennyholzer ◴[] No.45141081{4}[source]
[flagged]
replies(1): >>45147774 #
14. blinding-streak ◴[] No.45141903{3}[source]
Bob Newhart was a treasure. My favorite joke of his:

"I don't like country music, but I don't mean to denigrate those who do. And for the people who like country music, denigrate means 'put down'."

15. tomhow ◴[] No.45147774{5}[source]
Please stop this.

https://news.ycombinator.com/newsguidelines.html

16. GLdRH ◴[] No.45148274{3}[source]
I for one welcome our new machine overlords