To be honest, these companies have already scraped terabytes of data and don't even disclose their datasets, so you have to assume they'll take and train on anything you throw at them.
replies(4):
I trust people until they give me cause to do otherwise.
I asked Claude: "If a company has a privacy policy saying they will not train on your data, and then decides to change the policy in order 'to make the models better for everyone,' what should the terms be?"
The model suggests, in the first paragraph or so, EXPLICIT OPT-IN. Not opt-out.