371 points | ulrischa | 5 comments
t_mann No.43235506
Hallucinations themselves are not even the greatest risk posed by LLMs. A much greater risk (in simple terms of probability times severity), I'd say, is that chatbots can talk humans into harming themselves or others. Both have already happened, btw [0,1]. I'm still not sure I'd call that the greatest overall risk, but I don't even want to share here my ideas for what could be even more dangerous.

[0] https://www.qut.edu.au/news/realfocus/deaths-linked-to-chatb...

[1] https://www.theguardian.com/uk-news/2023/jul/06/ai-chatbot-e...

replies(4): >>43235623 >>43236225 >>43238379 >>43238746
tombert No.43235623
I don't know if the model has changed in the last six months, or maybe the wow factor has just worn off a bit, but it feels like ChatGPT has become a lot more "people-pleasy" than it was before.

I'll ask it opinionated questions, and it will just reaffirm whatever I said, even when I give contrary opinions later in the same chat.

I personally find it annoying (I don't really get along with human people-pleasers either), but I could see someone using it as a tool to justify doing bad stuff, including self-harm; it doesn't really ever push back on what I say.

replies(3): >>43236142 >>43236909 >>43239921
1. renewiltord No.43236142
It's obvious, isn't it? The average Hacker News user, who has converged to the average Internet user, wants exactly that experience. LLMs are pretty good tools but perhaps they shouldn't be made available to others. People like me can use them but others seem to be killed when making contact. I think it's fine to restrict access to the elite. We don't let just anyone fly a fighter jet. Perhaps the average HN user should be protected from LLM interactions.
replies(1): >>43236228
2. tombert No.43236228
Is that really what you got from what I wrote? I wasn't suggesting that we restrict access to anyone, and I wasn't trying to imply that I'm somehow immune to the problems that were highlighted.

I mentioned that I don't like people-pleasers and that I find it a bit obnoxious when ChatGPT acts like one. I'm sure there are other bits of subtle encouragement it gives me that I don't notice, but I can't elaborate on those because, you know, I didn't notice them.

I genuinely do not know how you got "we should restrict access" from my comment or the parent; you just extrapolated to make a pretty stupid joke.

replies(1): >>43237601
3. renewiltord No.43237601
Haha, I'm not claiming you want that. I want that. So I'm saying it. What makes you think I was attempting to restate what you wrote?
replies(1): >>43238374
4. tombert No.43238374
It looked like you were being sarcastic, implying I was suggesting that I'm better than the average person at handling AI. Particularly this line:

> People like me can use them but others seem to be killed when making contact.

If I misread that, fair enough.

replies(1): >>43239390
5. renewiltord No.43239390
Yeah, no, it's my 100% sincere personal view. That guy who killed himself after using it was obviously not ready for this. Imagine killing yourself after typing `print("Kill yourself")` at the Python REPL. The guy's an imbecile. We don't let just anyone drive a truck. I'm fine with nearly everyone being on the outside, unable to use these tools, so long as I'm allowed to use them with as little trouble as possible.

I recognize that the view that others shouldn't be permitted things I should be allowed to use is usually expressed sarcastically, but I genuinely think it has merit. Everyone who believes these things are dangerous, and everyone to whom they are obviously dangerous, like the aforementioned mentally deficient individual, shouldn't be permitted to use them.