
747 points porridgeraisin | 2 comments
psychoslave No.45062941
What a surprise: a big corp collected a large amount of personal data under certain promises, and now reveals it will actually exploit that data in a completely unrelated manner.
raldi No.45063239
“These updates will apply only to new or resumed chats and coding sessions.”

https://www.anthropic.com/news/updates-to-our-consumer-terms

benterix No.45063343
What kind of guarantee do we have that this is true?

Meta downloaded copyrighted content and trained their models on it; OpenAI did the same.

Uber developed Greyball to deceive officials and break the law.

Tesla deletes accident data and reports to the authorities that it doesn't have it.

So forgive me if I have zero trust in whatever these companies say.

1. Thorrez No.45063846
We're having this discussion on an article about Anthropic changing its privacy policy. If you don't believe Anthropic will follow its privacy policy, then a change to that policy should mean nothing to you.
2. benterix No.45073537
Well, yes and no. Compared with the previous ToS, it gives them more plausible deniability ("oh, this particular piece just ended up in the training set by accident") if they get caught.