←back to thread

439 points diggan | 6 comments | | HN request time: 1.131s | source | bottom
Show context
AlecSchueler ◴[] No.45062904[source]
Am I the only one that assumed everything was already being used for training?
replies(9): >>45062929 #>>45063168 #>>45063951 #>>45064966 #>>45065323 #>>45065428 #>>45065912 #>>45066950 #>>45070135 #
Aurornis ◴[] No.45065912[source]
I don't understand this mindset. Why would you assume anything? It took me a couple minutes at most to check when I first started using Claude.

I check when I start using any new service. The cynical assumption that everything's being shared leads to shrugging it off and making no attempt to look for settings.

It only takes a moment to go into settings -> privacy and look.

replies(7): >>45065932 #>>45065968 #>>45066053 #>>45066125 #>>45068206 #>>45068998 #>>45070223 #
1. lbrito ◴[] No.45066125[source]
>Why would you assume anything?

Because they already used data without permission on a much larger scale, so it's a perfectly logical assumption that they would continue doing so with their users?

replies(1): >>45067797 #
2. simonw ◴[] No.45067797[source]
I don't think that logically makes sense.

Training on everything you can publicly scrape from the internet is a very different thing from training on data that your users submit directly to your service.

replies(2): >>45069962 #>>45070009 #
3. rpgbr ◴[] No.45069962[source]
>Training on everything you can publicly scrape from the internet is a very different thing from training on data that your users submit directly to your service.

Yes. It's way easier and cheaper when the data comes to you instead of having to scrape everything elsewhere.

4. fcarraldo ◴[] No.45070009[source]
OpenAI, Meta and X all train from user submitted data, in Meta and X’s case data that had been submitted long before the advent of LLMs.

It’s not a leap to assume Anthropic does the same.

replies(1): >>45072303 #
5. adastra22 ◴[] No.45072303{3}[source]
By X do you mean tweets? Can you not see how different that is from training on your private conversations with an LLM?

What if you ask it for medical advice, or legal things? What if you turn on Gmail integration? Should I now be able to generate your conversations with the right prompt?

replies(1): >>45085938 #
6. fcarraldo ◴[] No.45085938{4}[source]
I don't think AI companies should be doing this, but they are doing it. All are opt-out, not opt-in. Anthropic is just changing their policies to be the same as their competition.

xAI trains Grok on both public data (Tweets) and non-public data (Conversations with Grok) by default. [0]

> Grok.com Data Controls for Training Grok: For the Grok.com website, you can go to Settings, Data, and then “Improve the Model” to select whether your content is used for model training.

Meta trains its AI on things posted to Meta's products, which are not as "public" as Tweets on X, because users expect these to be shared only with their networks. They do not use DMs, but they do use posts to Instagram/Facebook/etc. [1]

> We use information that is publicly available online and licensed information. We also use information shared on Meta Products. This information could be things like posts or photos and their captions. We do not use the content of your private messages with friends and family to train our AIs unless you or someone in the chat chooses to share those messages with our AIs.

OpenAI uses conversations for training data by default [2]

> When you use our services for individuals such as ChatGPT, Codex, and Sora, we may use your content to train our models.

> You can opt out of training through our privacy portal by clicking on “do not train on my content.” To turn off training for your ChatGPT conversations and Codex tasks, follow the instructions in our Data Controls FAQ. Once you opt out, new conversations will not be used to train our models.

[0] https://x.ai/legal/faq

[1] https://www.facebook.com/privacy/genai/

[2] https://help.openai.com/en/articles/5722486-how-your-data-is...