
371 points ulrischa | 1 comment | source
t_mann ◴[] No.43235506[source]
Hallucinations themselves are not even the greatest risk posed by LLMs. A much greater risk (in simple terms of probability times severity), I'd say, is that chatbots can talk humans into harming themselves or others. Both have already happened, btw [0,1]. I'm still not sure I'd call that the greatest overall risk, but the ideas I have for what could be even more dangerous I'd rather not share here.

[0] https://www.qut.edu.au/news/realfocus/deaths-linked-to-chatb...

[1] https://www.theguardian.com/uk-news/2023/jul/06/ai-chatbot-e...

replies(4): >>43235623 #>>43236225 #>>43238379 #>>43238746 #
tombert ◴[] No.43235623[source]
I don't know if the model changed in the last six months, or maybe the wow factor has worn off a bit, but it also feels like ChatGPT has become a lot more "people-pleasy" than it was before.

I'll ask it opinionated questions, and it will just go out of its way to reaffirm whatever I said, even when I voice contrary opinions later in the same chat.

I personally find it annoying (I don't really get along with human people-pleasers either), but I could see someone using it as a tool to justify doing bad stuff, including self-harm; it never really pushes back on anything I say.

replies(3): >>43236142 #>>43236909 #>>43239921 #
unclebucknasty ◴[] No.43236909[source]
Yeah, I think it's coded to be super-conciliatory as some sort of apology for its hallucinations, but I find it annoying as well. Part of it is the same problem as any automated system that tries too hard to sound human: when you know it's not human, it comes across as patronizing.

But it's actually worse here, because it's generally apologizing for something completely wrong that it told you just moments before with extreme confidence.