https://www.anthropic.com/news/updates-to-our-consumer-terms
Meta downloaded copyrighted content and trained their models on it; OpenAI did the same.
Uber developed Greyball to deceive officials and break the law.
Tesla deletes accident data and tells the authorities they don't have it.
So forgive me if I have zero trust in whatever these companies say.
None. And even if it's the nicest goody-two-shoes company in the history of capitalism, the NSA will have your data, then there'll be a breach, and then Russian cybercriminals will have it too.
At this point I'm with you on the zero trust: we should be shouting loud and clear to everyone that if you put data into a web browser or app, that data will at some point be sold for profit without any say-so from you.
If you don’t take companies at their word, you need to be consistent about it.
Where did these companies claim they didn’t do this?
Even websites can be covered by copyright. It has always been known that they trained on copyrighted content. The output is considered derivative, and therefore it's not illegal.
I don't own a car and only take public transit or bike. I fill my transit card with cash. I buy food with cash at the farmers' morning market. My TV isn't connected to the Internet; it's connected to a Raspberry Pi, which is connected to my home lab running Jellyfin and YouTube archiving software. I de-Googled and use an old used phone with FOSS apps.
It's all happened so gradually I didn't even realize how far I'd gone!
If your threat model is to unconditionally distrust the companies, then what they're saying is irrelevant. Which is fair enough: you probably shouldn't be using a service you don't trust at all. But there's not much of a discussion to be had when you can simply assert that everything they say is a lie.
> Meta downloaded copyrighted content and trained their models on it, OpenAI did the same.
> Uber developed Greyball to cheat the officials and break the law.
These seem like randomly chosen generic grievances, not examples of companies making promises in their privacy policy (or similar) and breaking them. Am I missing some connection?
My point is that whenever we send our data to a third party, we can assume it could be abused, either unintentionally (through a hack, a mistake, etc.) or intentionally, because these companies are corrupted to the core and have a very relaxed attitude toward obeying the law in general, as these random examples show.
Well, this is what they claim. In practice, it's untrue on several levels. First, earlier OpenAI models were able to quote sources verbatim, and they were maimed later so they wouldn't do that. Second, there have been several lawsuits against OpenAI, and not all of them have concluded. And finally, assuming the courts decide that what they did was legal would mean everyone could legally download and use a copy of Libgen (part of "Books3"), whereas courts around the world are doing the opposite and blocking access to Libgen country by country. So unless you apply a double standard, something is not right here. Even the Meta employees torrenting Libgen knew that, so let's not pretend we buy this rhetoric.