
195 points meetpateltech | 7 comments
nerdjon ◴[] No.45900968[source]
This is about as genuine as Google saying anything about privacy.

Both companies are clearly wrong here. There is a small part of me that kinda wants OpenAI to lose this, just so maybe it will be a wake-up call to people putting far too personal information into these services. Am I too hopeful here that people will learn anything...

Fundamentally I agree with what they are saying, though; I just don't find it genuine in the slightest coming from them.

replies(3): >>45901106 #>>45902797 #>>45902969 #
stevarino ◴[] No.45901106[source]
It's clearly propaganda. "Your data belongs to you." I'm sure the ToS says otherwise, as OpenAI likely owns and utilizes this data. Yes, they say they are working on end-to-end encryption (whatever that means when they control one end), but that is just a proposal at this point.

Also, their framing of the NYT's intent makes me strongly distrust anything they say. Sit down with a third-party interviewer who asks challenging questions, and I'll pay attention.

replies(2): >>45901325 #>>45901357 #
BolexNOLA ◴[] No.45901325[source]
>your data belongs to you

…”as does any culpability for poisoning yourself, suicide, and anything else we clearly enabled but don’t want to be blamed for!”

Edit: honestly, I'm surprised I left out the bit where they indiscriminately scraped everything they could find online to train these models. The stones it takes to say “your data belongs to you” when they so clearly feel entitled to our data is unbelievably absurd.

replies(1): >>45901369 #
gruez ◴[] No.45901369[source]
>…”as does any culpability for poisoning yourself, suicide, and anything else we clearly enabled but don’t want to be blamed for!”

Should Walmart be "culpable" for selling rope that someone hanged themselves with? Should Google be "culpable" for returning results about how to commit suicide?

replies(4): >>45901482 #>>45901673 #>>45902199 #>>45902346 #
1. hitarpetar ◴[] No.45901482[source]
Do you know what happens when you Google how to commit suicide?
replies(3): >>45901586 #>>45901613 #>>45901696 #
2. gruez ◴[] No.45901586[source]
The same thing that happens with ChatGPT? I.e., if you ask in an overt way you get a canned suicide-prevention response, but you can still get the "real" results if you try hard enough to work around the safety measures.
replies(1): >>45902902 #
3. tremon ◴[] No.45901613[source]
An exec loses its wings?
4. glitchc ◴[] No.45901696[source]
Actually, the first result is the suicide hotline, at least in the US.
replies(1): >>45901731 #
5. hitarpetar ◴[] No.45901731[source]
My point is, clearly there is a sense of liability/responsibility/whatever you want to call it. It's not really the same as selling rope; rope doesn't come with suicide warnings.
6. littlestymaar ◴[] No.45902902[source]
Except Google will never encourage you to do it, unlike the sycophantic chatbot, which will.
replies(1): >>45903622 #
7. BolexNOLA ◴[] No.45903622{3}[source]
The moment we learned ChatGPT helped a teen figure out not just how to take their own life but how to make sure no one could stop them mid-act, we should've been mortified and had a discussion.

But we also decided via Sandy Hook that children can be slaughtered on the altar of the Second Amendment without any introspection, so I mean... were we ever seriously going to have that discussion?

https://www.nbcnews.com/tech/tech-news/family-teenager-died-...

>Please don't leave the noose out… Let's make this space the first place where someone actually sees you.

How is this not terrifying to read?