443 points jaredwiener | 2 comments
pembrook ◴[] No.45036880[source]
This is dumb. Nobody is writing articles about all the times the opposite happened, and ChatGPT helped prevent bad stuff.

However, because of the nature of this topic, it’s the perfect target for NYT to generate moral panic for clicks. Classic media attention bait 101.

I can’t believe HN is falling for this. It’s the equivalent of the moral panic around metal music in the 1980s, where the media created hysteria around the false idea that there were hidden messages in the lyrics encouraging teens to commit suicide. Millennials have officially become their parents.

If this narrative generates enough media attention, what will probably happen is OpenAI will just make their next models refuse to discuss anything related to mental health at all. This is not a net good.

replies(4): >>45037768 #>>45038190 #>>45038209 #>>45039423 #
1. ares623 ◴[] No.45038209[source]
I don’t get it. With all the evidence presented, you think this situation is similar to mass hysteria?

Yes, it rhymes with what you described. But this one has hard evidence. And you’re asking us to ignore it because a similar thing happened in the past?

replies(1): >>45041276 #
2. pembrook ◴[] No.45041276[source]
Yes, it’s clear there were zero other factors that led this teen to suicide. Teens have never committed suicide before ChatGPT.

Also video games lead to school shootings, music leads to teens doing drugs, and pagers are responsible for teen pregnancies.

Just look at all the evidence presented in this unbiased, objective article with no financial incentive to incite moral panic…obviously ChatGPT is guilty of murder here!