747 points porridgeraisin | 96 comments
1. troad ◴[] No.45062852[source]
You can opt out, but the fact that it's opt-in by default and made to look like a simple T/C update prompt leaves a sour taste in my mouth. The five year retention period seems... excessive. I wonder if they've buried anything else objectionable in the new terms.

It was the kick in the pants I needed to cancel my subscription.

replies(22): >>45062875 #>>45062894 #>>45062895 #>>45062930 #>>45062936 #>>45062949 #>>45062975 #>>45063015 #>>45063070 #>>45063116 #>>45063150 #>>45063171 #>>45063186 #>>45063387 #>>45063615 #>>45064792 #>>45064955 #>>45064986 #>>45064996 #>>45066593 #>>45070194 #>>45074231 #
2. ◴[] No.45062875[source]
3. perihelions ◴[] No.45062894[source]
What are you replacing it with?
replies(2): >>45062939 #>>45063029 #
4. kordlessagain ◴[] No.45062895[source]
> It was the kick in the pants I needed to cancel my subscription.

As if barely two 9s of uptime wasn't enough.

5. JohnnyMarcone ◴[] No.45062930[source]
I got a pop-up when I opened the app explaining the change and an option to opt out. That seems very transparent to me.
replies(7): >>45062973 #>>45063111 #>>45063442 #>>45063450 #>>45063748 #>>45064206 #>>45064407 #
6. I_am_tiberius ◴[] No.45062936[source]
"five year retention". If it's in a model once, it's there forever.
replies(3): >>45063032 #>>45064024 #>>45064875 #
7. ivape ◴[] No.45062939[source]
I like to think using OpenRouter is better, but there’s absolutely no guarantee from any of the individual providers with respect to privacy and no logging.
8. demarq ◴[] No.45062949[source]
Are you sure the opt-out isn’t only for training? The retention does not seem to be affected by the toggle.
replies(2): >>45063038 #>>45063233 #
9. cube00 ◴[] No.45062973[source]
> That seems very transparent to me.

Grabbing users during startup with the less privacy-focused option preselected isn't being "very transparent".

They could have forced the user to make a choice or defaulted to not training on their content, but instead they just can't help themselves.

10. episteme ◴[] No.45062975[source]
What will you use instead? I’m finding Claude the best experience, since ChatGPT 5 is so slow and doesn't give any better answers than 4.
replies(5): >>45063056 #>>45063355 #>>45064689 #>>45065093 #>>45066512 #
11. smallerfish ◴[] No.45063015[source]
Settings > Privacy > Privacy Settings
replies(1): >>45063052 #
12. troad ◴[] No.45063029[source]
Two weeks left in the sub to figure it out, but I'm not yet sure. I was never all in on all the tooling, I mostly used it as smart search (e.g. ImageMagick incantations) and for trivial scripting that I couldn't be bothered writing myself, so I might just stick to whatever comes with Kagi, see if that doesn't cover me.
replies(2): >>45063084 #>>45063244 #
13. Hnrobert42 ◴[] No.45063032[source]
Is that true? Do models get rebuilt from scratch each time or do they get iterated on?
replies(1): >>45063058 #
14. jasona123 ◴[] No.45063038[source]
From the PR update: https://www.anthropic.com/news/updates-to-our-consumer-terms

“If you do not choose to provide your data for model training, you’ll continue with our existing 30-day data retention period.“

From the support page: https://privacy.anthropic.com/en/articles/10023548-how-long-...

“If you choose not to allow us to use your chats and coding sessions to improve Claude, your chats will be retained in our back-end storage systems for up to 30 days.”

15. kossTKR ◴[] No.45063052[source]
I don't see any setting related to this? Just:

Export data

Shared chats

Location metadata

Review and update terms and conditions

I'm in the EU, maybe that's helping me?

replies(1): >>45063126 #
16. teekert ◴[] No.45063056[source]
Granted, it is a stretch and nowhere near the features of Claude (no code, etc.), but at least Proton's Lumo [0] is very privacy-oriented.

I have to admit, I've used it a bit over the last few days and still reactivated my Claude Pro subscription today, so... let's say it's OK for casual stuff? Also useful for casual coding questions. So if you care about it, it's an option.

[0] https://lumo.proton.me/

17. I_am_tiberius ◴[] No.45063058{3}[source]
I believe the big models currently get built from scratch (with random starting weights). That wasn't my point though. I meant that a model, once created, might be used for a very long time. Maybe they even release the weights at some point ("open source").
replies(1): >>45063121 #
18. wzdd ◴[] No.45063070[source]
Everywhere else in Anthropic's interface, yes/no switches show blue when enabled and black when disabled. In the box they're showing about this change, the slider shows grey in both states: visit it in preferences to see the difference! It's not just disappointing but also kind of sad that someone went to the effort to do this.
replies(3): >>45063117 #>>45063179 #>>45065374 #
19. perihelions ◴[] No.45063084{3}[source]
How does Kagi (claim that they) enforce privacy rights on the major LLM providers? Have they negotiated a special contract?

I'm looking at

> "When you use the Assistant by Kagi, your data is never used to train AI models (not by us or by the LLM providers), and no account information is shared with the LLM providers. By default, threads are deleted after 24 hours of inactivity. This behavior can be adjusted in the settings."

https://help.kagi.com/kagi/ai/assistant.html#privacy

And trying to reconcile those claims with the instant thread. Anthropic is listed as one of their back-end providers. Is that data retained for five years on Anthropic's end, or 24 hours? Is that data used for training Anthropic models, or has Anthropic agreed in writing not to, for Kagi clients?

replies(2): >>45063123 #>>45063321 #
20. elashri ◴[] No.45063111[source]
> That seems very transparent to me

Implicit consent is not transparent and should be illegal in all situations. I can't tell you that unless you opt out, you have agreed to let me rent you an apartment.

You can say the analogy is not straightforwardly comparable, but the overall idea is the same. If we enter a contract for me to fix your broken windows, I cannot extend it to do anything else I see fit in the house with implicit consent.

replies(2): >>45067270 #>>45073380 #
21. merelysounds ◴[] No.45063116[source]
> opt-in by default

Nitpicking: “opt in by default” doesn’t exist; it’s either “opt in” or “opt out”, and this is “opt out”. By definition, an “opt out” setting is selected by default.

replies(5): >>45063357 #>>45064080 #>>45064709 #>>45064980 #>>45065703 #
22. riz_ ◴[] No.45063117[source]
This is probably because there are laws in some countries that restrict how these buttons/switches can look (think cookie banners, where sometimes there is a huge green button to accept, and a tiny greyed out text somewhere for the settings).
replies(1): >>45064227 #
23. ◴[] No.45063121{4}[source]
24. vinnyorvinny ◴[] No.45063123{4}[source]
There is an option to opt out, right? So I assume they just make sure to always opt out.
replies(1): >>45063129 #
25. croes ◴[] No.45063126{3}[source]
Have you clicked "Review and update terms and conditions"?

It's part of the update

replies(1): >>45063146 #
26. ◴[] No.45063129{5}[source]
27. kossTKR ◴[] No.45063146{4}[source]
Oh, I see, thanks. That's a dark design pattern, hiding stuff like that.

No one cares about anything else, but they have lots of superfluous text and they are calling it "help us get better", blah blah; it's "help us earn more money and potentially sell or leak your extremely private info", so they are lying.

Considering cancelling my subscription right this moment.

I hope the EU at least considers banning or extreme-fining companies trying to retroactively use people's extremely private data like this; it's completely over the line.

replies(1): >>45063538 #
28. monegator ◴[] No.45063150[source]
I'm super duper sure that my data won't be stored and eventually used if I opt out.
29. Joker_vD ◴[] No.45063171[source]
> You can opt out

You can say that you want to opt out. What Anthropic will decide to do with your declaration is a different question.

replies(1): >>45064103 #
30. senko ◴[] No.45063179[source]
Just did, and it behaves as expected for me in the Android app (i.e. not the dark pattern you described).
replies(1): >>45064551 #
31. ◴[] No.45063186[source]
32. zenmaster10665 ◴[] No.45063233[source]
It seems really badly designed, or maybe it is meant to be confusing. It does not make it clear that the two are linked together, and you have to "accept" them both together even though there is only a toggle on the "help us make the model better" item.
33. fnordlord ◴[] No.45063244{3}[source]
I'm mostly replying because I was truly using it for an ImageMagick incantation yesterday. I use the API rather than chat, if that's an option for you. I put $20 into it every few months and it mostly does what I need. I'm using Raycast for quick and dirty questions and AnythingLLM for longer conversations.
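
If it helps, the whole thing is only a few lines (a minimal sketch, assuming the official anthropic Python SDK and an ANTHROPIC_API_KEY set in your environment; the model name is just illustrative):

    import anthropic

    # Minimal sketch: the client reads ANTHROPIC_API_KEY from the environment.
    client = anthropic.Anthropic()
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # illustrative; use whichever model you prefer
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": "ImageMagick one-liner to resize every PNG in a folder to 800px wide?",
        }],
    )
    print(message.content[0].text)
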
34. FergusArgyll ◴[] No.45063321{4}[source]
They are using LLMs through the API, where it's the B2B world and you can get privacy.
35. javierluraschi ◴[] No.45063355[source]
https://grok.com
replies(3): >>45063460 #>>45063504 #>>45063800 #
36. benterix ◴[] No.45063357[source]
This is not nitpicking, this is a sane reaction to someone modifying the meaning of words on the fly.
replies(2): >>45063412 #>>45064763 #
37. javcasas ◴[] No.45063387[source]
You can request that your data not be used. Your request will appropriately be read and redirected to /dev/null.
38. klabb3 ◴[] No.45063412{3}[source]
To be fair it trips people up all the time. Even precise terminology isn't great if people misuse it. Maybe it would have been better to just use "enabled by default".
39. DrillShopper ◴[] No.45063442[source]
It should be opt-in, not opt-out.

The fact that there's no law mandating opt-in only for data retention consent (or any anti-consumer "feature") is maddening at times.

40. oblio ◴[] No.45063450[source]
Opt-in leads to very low adoption and is the moral choice.

Opt-out leads to very high adoption and is the immoral choice.

Guess which one companies adopt when not forced through legislation?

41. ehnto ◴[] No.45063460{3}[source]
From the frypan into the fire. I think the reality, proven by history and even just these short five years, is that no company will hold onto their ethics in this space. This should surprise no one, since the first step of the enterprise is hoovering up the world's data without permission.
42. Arubis ◴[] No.45063504{3}[source]
Worse by every measure.
replies(1): >>45065156 #
43. klabb3 ◴[] No.45063538{5}[source]
EU or not, it baffles me that people don't see this glaring conflict of interest. AI companies both produce the model and rent out inference. In other words, you're expecting that the company that (a) desperately craves your data the most and (b) also happens to collect large amounts of high-quality data from you will simply not use it. It's like asking a child to keep your candy safe.

I'd love to live in a society where laws could effectively regulate these things. I would also like a Pony.

replies(2): >>45063733 #>>45064620 #
44. ◴[] No.45063615[source]
45. kossTKR ◴[] No.45063733{6}[source]
This is why we need actual regulation, and not the semi-fascist monopolist corporatocracy we've evolved into now.

It's only utopian because it's become so incredibly bad.

We shouldn't expect less, and we shouldn't push guilt or responsibility onto the consumer; we should push for more, unless you actively want your neighbour, your mom, and 95% of the population to be in constant trouble with absolutely everything from tech to food safety, chemicals, or healthcare. Most people aren't rich engineers like on this forum, and I don't want to research for 5 hours every time I buy something because some absolute psychopaths have removed all regulation and sensible defaults so someone can party on a yacht.

replies(1): >>45064824 #
46. felideon ◴[] No.45063748[source]
> seems very transparent

Except not:

> The interface design has drawn criticism from privacy advocates, as the large black "Accept" button is prominently displayed while the opt-out toggle appears in smaller text beneath. The toggle defaults to "On," meaning users who quickly click "Accept" without reading the details will automatically consent to data training.

Definitely happened to me, as it was late and I was being lazy.

47. mac-attack ◴[] No.45063800{3}[source]
What sane person would downgrade to Grok?
48. whimsicalism ◴[] No.45064024[source]
Yes, it’s a very big loophole. And if it’s a generative model, you can just launder the data through synthetic generation/distillation into future models.
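
A toy sketch of what that laundering looks like (purely illustrative, using scikit-learn, and obviously not anyone's actual pipeline): the student never touches the private rows, only the teacher's outputs, yet it ends up with roughly the same behavior.

    # Toy illustration only: a teacher fit on private data labels fresh synthetic
    # inputs, and a student is trained solely on those teacher-labeled samples.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Private data that only the teacher ever sees.
    X_private = rng.normal(size=(500, 4))
    y_private = (X_private[:, 0] + X_private[:, 1] > 0).astype(int)
    teacher = LogisticRegression().fit(X_private, y_private)

    # "Synthetic generation": new inputs labeled by the teacher, not by any user.
    X_synth = rng.normal(size=(5000, 4))
    y_synth = teacher.predict(X_synth)

    # The student inherits the teacher's decision boundary from laundered data.
    student = LogisticRegression().fit(X_synth, y_synth)
    print("student/teacher agreement:", (student.predict(X_synth) == y_synth).mean())
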
49. ◴[] No.45064080[source]
50. AlexandrB ◴[] No.45064103[source]
I look forward to this setting getting turned on again "accidentally" when new models are released or the ToS is updated.
51. insane_dreamer ◴[] No.45064206[source]
It should be off by default, with the option to opt in.
52. soulofmischief ◴[] No.45064227{3}[source]
Can you provide an example?
replies(1): >>45064505 #
53. ornornor ◴[] No.45064407[source]
It’s not. And also, whether you move the toggle to on or off, you still have to click Accept, so it really isn’t clear whether you’re accepting to share your data or not.

Never mind the complete 180 on privacy.

54. riz_ ◴[] No.45064505{4}[source]
https://www.cnil.fr/en/dark-patterns-cookie-banners-cnil-iss...
replies(1): >>45066715 #
55. BalinKing ◴[] No.45064551{3}[source]
I can confirm it's grey on both sides on the website.
replies(1): >>45065542 #
56. croes ◴[] No.45064620{6}[source]
>It's like asking a child to keep your candy safe

That's why we don't hand billions of dollars to a child. Maybe we should treat AI companies similarly.

57. weregiraffe ◴[] No.45064689[source]
>What will you use instead? I’m finding Claude the best experience since ChatGPT 5 is so slow and not any better answers than 4.

You could try programming with your own brain.

replies(1): >>45064821 #
58. ◴[] No.45064709[source]
59. troad ◴[] No.45064763{3}[source]
The original meaning of sane is "physically healthy". Its usual modern meaning is "mentally healthy". You're using it to mean "reasonable".

At which exact point is language prohibited from evolving, and why is it super coincidentally the exact years you learnt it?

replies(2): >>45064883 #>>45066803 #
60. ethagnawl ◴[] No.45064792[source]
I wonder what happens if I don't accept the new T&C? I've been successfully dismissing an updated T&C prompt in a popular group messaging application for years -- I lack the time and legal acumen to process it -- without issue.

Also, for others who want to opt-out, the toggle is in the T&C modal itself.

replies(2): >>45065037 #>>45065092 #
61. ◴[] No.45064821{3}[source]
62. frm88 ◴[] No.45064824{7}[source]
Bravo! This has to be the most coherent and well-formulated rant I have read in a long time. Thank you!
63. disconcision ◴[] No.45064875[source]
This is somewhat true, but I'm not sure how load-bearing it is. For one, I think it's going to be a while until 'we asked the model what Bob said' is as admissible as the result of a database query.
64. danans ◴[] No.45064883{4}[source]
> At which exact point is language prohibited from evolving

Never?

https://en.m.wikipedia.org/wiki/Semantic_change

replies(1): >>45064926 #
65. troad ◴[] No.45064926{5}[source]
Yes, that was my point.
replies(1): >>45066797 #
66. ◴[] No.45064955[source]
67. ◴[] No.45064980[source]
68. energy123 ◴[] No.45064986[source]
Has anyone asked why OpenAI has two very separate opt-out mechanisms (one in settings, the other via a formal request that you need to lodge via their privacy or platform page)? That always seemed likely to me to be hiding a technicality that allows them to train on some forms of user data.
69. nicce ◴[] No.45064996[source]
OpenAI's temporary chat still advertises that chats are stored for 30 days, while there is a court order that everything must be retained indefinitely. I wonder why they are not obligated to state this quite extreme retention.
70. nicce ◴[] No.45065037[source]
I tried to do that with WhatsApp and it eventually stopped working.
71. layer8 ◴[] No.45065092[source]
The new privacy policy automatically becomes effective on September 28, if you don’t already agree to it before. Anthropic states that “After September 28, you’ll need to make your selection on the model training setting in order to continue using Claude.”
72. soiltype ◴[] No.45065093[source]
Since I don't use LLMs to directly code for me, I'm going to (mis?)place my trust in Kagi Assistant entirely for the time being. It claims not to associate prompts with individual accounts. The small friction of keeping a browser tab open is worth it for me for now.
73. weberer ◴[] No.45065156{4}[source]
What metrics are you looking at? Grok 4 outperforms Claude 4 Opus in the Artificial Analysis Intelligence Index.

https://artificialanalysis.ai/leaderboards/models

74. Aurornis ◴[] No.45065374[source]
It works correctly (blue on, grey off) in the iOS app. I just did it now.
75. tln ◴[] No.45065542{4}[source]
I get blue (on) / black (off) on the website. Or blue / white in light mode.

https://claude.ai/settings/data-privacy-controls

It was easy to not opt-in, I got prompted before I saw any of this.

I think they should keep the opt-in behavior past Sept 28 personally.

replies(1): >>45067375 #
76. tln ◴[] No.45065703[source]
> By definition an “opt out” setting is selected by default.

No, (IMO) an "opt out" setting / status is assumed/enabled without asking.

So, I think this is opt-in, until Sept 28.

Opt-in, whether pre-checked/pre-ticked or not, means the business asks you.

GDPR requires "affirmative, opt-in consent"; perhaps we should use that term to mean an opt-in that is not pre-ticked.

replies(1): >>45066185 #
77. whilenot-dev ◴[] No.45066185{3}[source]
Regardless of whether it's opt-in or opt-out, the business will need to confirm anything it opted for you by asking. If you don't select the opposing choice in a timely fashion, then the business assumes that it opted correctly in your interest and on your behalf.

> So, I think this is opt-in, until Sept 28.

If the business opted for consent, then you will effectively have the choice for refusal, a.k.a. opt-out.

78. nocommandline ◴[] No.45066512[source]
If you aren't using it for coding or advanced uses like video, etc, you can try running models locally on your machine using Ollama and others like it.

Self plug here - If you aren't technical and still want to run models locally, you can try our App [1]

1] https://ai.nocommandline.com
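
If you are comfortable with a terminal, the DIY route is also only a few lines (a rough sketch, assuming the ollama Python package and a model you've already pulled locally):

    # Rough sketch: assumes "pip install ollama" and that a model has already
    # been pulled with "ollama pull llama3"; nothing leaves your machine.
    import ollama

    response = ollama.chat(
        model="llama3",  # any locally pulled model
        messages=[{"role": "user", "content": "Write a bash loop that renames *.jpeg files to *.jpg"}],
    )
    print(response["message"]["content"])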

79. darepublic ◴[] No.45066593[source]
It's almost like this multi-billion-dollar company is misanthropic, despite their platitudes. Should I not hold my breath on Anthropic helping facilitate "an era of AI abundance for all"? (To quote a rejected PR applicant to Anthropic from the front page.)
80. gpm ◴[] No.45066715{5}[source]
This link is not even remotely close to an example of the behavior you described.
81. card_zero ◴[] No.45066797{6}[source]
And here it is, evolving before your eyes: we're killing off the maladaptive mutant which was "opt-in by default". That's the evolution that is happening here.
replies(1): >>45070611 #
82. soraminazuki ◴[] No.45066803{4}[source]
Diluting the distinction between opt-in and opt-out is gaslighting, not "evolution."
replies(1): >>45070669 #
83. mystraline ◴[] No.45067270{3}[source]
As a real-world counterexample, medical providers in the USA do this shit all the time.

A local office will do a blood draw, send it to a third party for analysis which isn't covered by insurance, then bill you in full. And you had NO contractual relationship with the testing company.

Same scam. And it's all because our government is completely captured by companies and oligopoly. Our government hasn't represented the people in a long time.

84. IAmGraydon ◴[] No.45067375{5}[source]
They’re likely A/B testing the interface change, which is why people are getting inconsistent results.
85. jmward01 ◴[] No.45070194[source]
The 5-year retention is the real kicker. Over the next 5 years, I find it doubtful that they won't keep modifying their TOS and presenting that opt-out 'option', so that all it will take is one accidental click and they have all your data from the start. Also, what is to stop them from removing the opt-out? Nothing says they have to give that option. 4 years and 364 days from now: a TOS change with no opt-out and a retention increase to 10 years. By then the privacy decline will already have been so huge that nobody will even notice that this 'option' was never even real.
86. troad ◴[] No.45070611{7}[source]
That would not be evolution, that would be an attempt at creationism. There is no evolution police, and never will be.
replies(1): >>45070824 #
87. troad ◴[] No.45070669{5}[source]
That seems like an ungenerous and frankly somewhat hysterical take.

By default, you are opted in. Perfectly clear.

The purpose of language is communication, not validating your politics.

replies(1): >>45071177 #
88. danparsonson ◴[] No.45070824{8}[source]
Selection pressure is the evolution police.
replies(1): >>45071301 #
89. soraminazuki ◴[] No.45071177{6}[source]
> By default, you are opted in. Perfectly clear.

That's called opt-out. You're doing exactly what I described: gaslighting people into believing that opt-in and opt-out are synonymous, rendering the entire concept meaningless. The audacity of you labeling people as "political" while resorting to such Orwellian manipulation is astounding. How can you lecture others about the purpose of language with a straight face when you're redefining terms to make it impossible for people to express a concept?

These are examples of what "opt-in by default" actually means. It means having the user manually consent to something every time, the polar opposite of your definition.

- https://arstechnica.com/gadgets/2024/06/report-new-apple-int...

- https://github.com/rom1504/img2dataset/issues/293

It's also just pure laziness to label me as "hysterical" when PR departments of companies like Google have, like you, misused the terms opt-out and opt-in in deceptive ways.

https://news.ycombinator.com/item?id=37314981

replies(1): >>45072611 #
90. card_zero ◴[] No.45071301{9}[source]
It would be fair to compare it to selective breeding, rather than natural selection. The flip side of rejecting usage is promoting neologisms. We can do both things deliberately, I see no rule saying that language is only allowed to evolve naturally. A reasonable criticism would be that trying to change it on purpose makes for a lot of unnecessary fuss, but we can be moderate about it.
replies(1): >>45077094 #
91. Nevermark ◴[] No.45072611{7}[source]
I completely agree with you from a correctness standpoint, ...

> Diluting the distinction between opt-in and opt-out is gaslighting

> That seems like an ungenerous and frankly somewhat hysterical take.

... however, this comment was a reasonable response.

Projective framing demonstrates your own lack of concern for accuracy, clarity, or conviviality, which is 180 degrees at odds with the point you are making and the site you are making it on.

replies(1): >>45073525 #
92. handoflixue ◴[] No.45073380{3}[source]
How is it "implicit" to click "I agree" to a large pop-up that takes up most of the screen?
replies(1): >>45074293 #
93. benterix ◴[] No.45073525{8}[source]
I can somehow understand the parent. If you control the language, you control the discourse. This is like the famous "I'm appalled at the negativity here on HN" comment threads when doing product launches, etc. Or using euphemisms to avoid calling a spade a spade. [0] People are fed up with these tricks, hence these emotional reactions.

[0] https://news.ycombinator.com/item?id=26346688

94. speckx ◴[] No.45074231[source]
I cancelled my subscription as well because of the opt-in by default.
95. danaris ◴[] No.45074293{4}[source]
Courts in various jurisdictions have found clickwrap agreements to be generally only valid for what one would expect to be common provisions within such agreements.

Essentially, because they are presented in a form that is so easy to bypass and so very common in our modern online life, provisions that give up too much to the service provider or would be too unusual or unexpected to find in such an agreement are unenforceable.

96. ◴[] No.45077094{10}[source]