
747 points porridgeraisin | 3 comments
psychoslave ◴[] No.45062941[source]
What a surprise: a big corp collected a large amount of personal data under certain promises, and now reveals it will actually exploit that data in a completely unrelated manner.
replies(7): >>45062982 #>>45063078 #>>45063239 #>>45064031 #>>45064041 #>>45064193 #>>45064287 #
raldi ◴[] No.45063239[source]
“These updates will apply only to new or resumed chats and coding sessions.”

https://www.anthropic.com/news/updates-to-our-consumer-terms

replies(1): >>45063343 #
benterix ◴[] No.45063343[source]
What kind of guarantee do we have that this is true?

Meta downloaded copyrighted content and trained their models on it; OpenAI did the same.

Uber developed Greyball to deceive officials and break the law.

Tesla deletes accident data and reports to the authorities that they don't have it.

So forgive me if I have zero trust in whatever these companies say.

replies(5): >>45063418 #>>45063536 #>>45063639 #>>45063846 #>>45063974 #
1. jsnell ◴[] No.45063974[source]
If it were a lie, why take the PR hit of telling the truth about starting to train on user data but lying about the specifics? It'd be much simpler to just lie about not training on user data at all.

If your threat model is to unconditionally distrust these companies, then what they're saying is irrelevant. Which is fair enough: you probably shouldn't be using a service you don't trust at all. But there's not much of a discussion to be had when you can just assert that everything they say is a lie.

> Meta downloaded copyrighted content and trained their models on it, OpenAI did the same.

> Uber developed Greyball to cheat the officials and break the law.

These seem like randomly chosen generic grievances, not examples of companies making promises in their privacy policy (or similar) and breaking them. Am I missing some connection?

replies(2): >>45064157 #>>45073554 #
2. ravishi ◴[] No.45064157[source]
It's all PR. Some people won't read the details and will just assume it trains on all data. Some people might complain, and they'll be told it was a bug or a minor slip. And moving forward, after a few months, nobody will remember it was ever different; some might vaguely remember them saying something about it at some point.
3. benterix ◴[] No.45073554[source]
> These seem like randomly chosen generic grievances, not examples of companies making promises in their privacy policy (or similar) and breaking them. Am I missing some connection?

My point is that whenever we send our data to a third party, we can assume it could be abused, either unintentionally (through a hack, a mistake, etc.) or intentionally, because these companies are corrupt to the core and have a very relaxed attitude toward obeying the law in general, as these random examples show.