195 points meetpateltech | 20 comments
nerdjon ◴[] No.45900968[source]
This screams just as genuine as Google saying anything about privacy.

Both companies are clearly wrong here. There is a small part of me that kind of wants OpenAI to lose this, just so maybe it will be a wake-up call to people putting far too personal information into these services. Am I too hopeful here that people will learn anything?

Fundamentally I agree with what they are saying, though; I just don't find it genuine in the slightest coming from them.

replies(3): >>45901106 #>>45902797 #>>45902969 #
1. stevarino ◴[] No.45901106[source]
It's clearly propaganda. "Your data belongs to you." I'm sure the ToS says otherwise, as OpenAI likely owns and utilizes this data. Yes, they say they are working on end-to-end encryption (whatever that means when they control one end), but that is just a proposal at this point.

Also their framing of the NYT intent makes me strongly distrust anything they say. Sit down with a third party interviewer who asks challenging questions, and I'll pay attention.

replies(2): >>45901325 #>>45901357 #
2. BolexNOLA ◴[] No.45901325[source]
>your data belongs to you

…”as does any culpability for poisoning yourself, suicide, and anything else we clearly enabled but don’t want to be blamed for!”

Edit: honestly I’m surprised I left out the bit where they indiscriminately scraped everything they could find online to train these models. The stones to say “your data belongs to you” while clearly feeling entitled to our data is unbelievably absurd.

replies(1): >>45901369 #
3. preinheimer ◴[] No.45901357[source]
"Your data belongs to you" but we can take any of your data we can find and use it for free for ever, without crediting you, notifying you, or giving you any way of having it removed.
replies(3): >>45901680 #>>45902194 #>>45902764 #
4. gruez ◴[] No.45901369[source]
>…”as does any culpability for poisoning yourself, suicide, and anything else we clearly enabled but don’t want to be blamed for!”

Should Walmart be "culpable" for selling rope that someone hanged themselves with? Should Google be "culpable" for returning results about how to commit suicide?

replies(4): >>45901482 #>>45901673 #>>45902199 #>>45902346 #
5. hitarpetar ◴[] No.45901482{3}[source]
do you know what happens when you Google how to commit suicide?
replies(3): >>45901586 #>>45901613 #>>45901696 #
6. gruez ◴[] No.45901586{4}[source]
The same thing that happens with ChatGPT? I.e., if you ask in an overt way you get a canned suicide-prevention result, but you can still get the "real" results if you try hard enough to work around the safety measures.
replies(1): >>45902902 #
7. tremon ◴[] No.45901613{4}[source]
An exec loses its wings?
8. BolexNOLA ◴[] No.45901673{3}[source]
This is as unproductive as "guns don't kill people, people do." You're stripping all legitimacy and nuance from the conversation with an overly simplistic response.
replies(1): >>45901766 #
9. glitchc ◴[] No.45901680[source]
It's owned by you, but OpenAI has a "perpetual, irrevocable, royalty-free license" to use the data as they see fit.
10. glitchc ◴[] No.45901696{4}[source]
Actually, the first result is the suicide hotline. This is at least true in the US.
replies(1): >>45901731 #
11. hitarpetar ◴[] No.45901731{5}[source]
My point is, clearly there is a sense of liability/responsibility/whatever you want to call it. Not really the same as selling rope; rope doesn't come with suicide warnings.
12. gruez ◴[] No.45901766{4}[source]
>You're stripping all legitimacy and nuance from the conversation with an overly simplistic response.

An overly simplistic claim only deserves an overly simplistic response.

replies(1): >>45902390 #
13. thinkingtoilet ◴[] No.45902194[source]
We can even download it illegally to train our models on it!
14. thinkingtoilet ◴[] No.45902199{3}[source]
That depends. Does the rope encourage vulnerable people to kill themselves and tell them how to do it? If so, then yes.
15. Wistar ◴[] No.45902346{3}[source]
There are current litigation efforts to hold Amazon liable for suicides committed by, in particular, self-poisoning with high-purity sodium nitrite, which, in low concentrations, is used as a meat-curing agent.

A 2023 lawsuit against Amazon over sodium nitrite suicides was dismissed, but other similar lawsuits continue. The judge held that Amazon “… had no duty to provide additional warnings, which in this case would not have prevented the deaths, and that Washington law preempted the negligence claims.”

16. BolexNOLA ◴[] No.45902390{5}[source]
What? The claim is true. The nuance is us discussing if it should be true/allowed. You're simplifying the moral discussion and overall just being rude/dismissive.

Comparing rope and an LLM comes across as disingenuous. I struggle to believe that you believe the two are comparable when it comes to the ethics of companies and their impact on society.

replies(1): >>45905313 #
17. bigyabai ◴[] No.45902764[source]
Wow it's almost like privately-managed security is a joke that just turns into de-facto surveillance at-scale.
18. littlestymaar ◴[] No.45902902{5}[source]
Except Google will never encourage you to do it, unlike the sycophantic Chatbot that will.
replies(1): >>45903622 #
19. BolexNOLA ◴[] No.45903622{6}[source]
The moment we learned ChatGPT helped a teen figure out not just how to take their own life but how to make sure no one can stop them mid-act, we should've been mortified and had a discussion.

But we also decided via Sandy Hook that children can be slaughtered on the altar of the second amendment without any introspection, so I mean...were we ever seriously going to have that discussion?

https://www.nbcnews.com/tech/tech-news/family-teenager-died-...

>Please don't leave the noose out… Let's make this space the first place where someone actually sees you.

How is this not terrifying to read?

20. ImPostingOnHN ◴[] No.45905313{6}[source]
> Comparing rope and an LLM comes across as disingenuous.

What makes you feel that? Both are tools, both have a wide array of good and bad uses. Maybe it'd be clearer if you explained why you think the two are incomparable except in cases of disingenuousness?

Remember that things are only compared when they are different; you wouldn't often compare a thing to itself. So, differences don't inherently make things incomparable.

> I struggle to believe that you believe the two are comparable when it comes to the ethics of companies and their impact on society.

I encourage you to broaden your perspective. For example: I don't struggle to believe that you disagree with the analogy, because smart people disagree with things all the time.

What kind of conversation would such a rude, dismissive judgement make for, anyway? "I have judged that nobody actually believes anything that disagrees with me, therefore my opinions are unanimous and unrivaled!"