I check whenever I start using a new service. The cynical assumption that everything is being shared anyway leads people to shrug it off and make no attempt to look for the settings.
It only takes a moment to go into settings -> privacy and look.
They’re assuming that Anthropic, which is already receiving and storing your data, is also training their models on that data.
How are you supposed to disprove that as a user?
Also, the whole point is that companies cannot be trusted to follow the settings.
Do you have any reason to think this does anything?
So your assumption is that the stated privacy policy of any company is completely accurate, that there is no means for the company to violate this policy, and that once violated you will immediately be notified.
> It only takes a moment to go into settings -> privacy and look.
It only takes a moment to examine history and observe why this is wholly inadequate.
Yes. It's way easier and cheaper when the data comes to you instead of having to scrape everything elsewhere.
https://www.reuters.com/sustainability/boards-policy-regulat...
It’s shocking to me that anyone who works in our industry would trust any company to do as they claim.
What if you ask it for medical advice, or legal things? What if you turn on Gmail integration? Should I now be able to generate your conversations with the right prompt?
xAI trains Grok on both public data (Tweets) and non-public data (Conversations with Grok) by default. [0]
> Grok.com Data Controls for Training Grok: For the Grok.com website, you can go to Settings, Data, and then “Improve the Model” to select whether your content is used for model training.
Meta trains its AI on things posted to Meta's products, which are not as "public" as Tweets on X, because users expect these to be shared only with their networks. They do not use DMs, but they do use posts to Instagram/Facebook/etc. [1]
> We use information that is publicly available online and licensed information. We also use information shared on Meta Products. This information could be things like posts or photos and their captions. We do not use the content of your private messages with friends and family to train our AIs unless you or someone in the chat chooses to share those messages with our AIs.
OpenAI uses conversations for training data by default. [2]
> When you use our services for individuals such as ChatGPT, Codex, and Sora, we may use your content to train our models.
> You can opt out of training through our privacy portal by clicking on “do not train on my content.” To turn off training for your ChatGPT conversations and Codex tasks, follow the instructions in our Data Controls FAQ. Once you opt out, new conversations will not be used to train our models.
[1] https://www.facebook.com/privacy/genai/
[2] https://help.openai.com/en/articles/5722486-how-your-data-is...