ljosifov (No.45064773)
Excellent. What were they waiting for until now? I thought they already trained on my data. I assume they train on it, and even hope they do, even when they say they don't. People who want to be data-privacy maximalists - fine, don't use their data. But there are people out there (myself included) at the opposite end of the spectrum, and we are mostly ignored by the companies. Companies just assume people only ever want to deny them their data.

It annoys me greatly that I have no tick box on Google to tell them "go and adapt the models I use on my Gmail, Photos, Maps, etc." I don't want Google to ever be mistaken about where I live - I have told them 100 times already.

This idea that "no one wants to share their data" is just assumed, and it permeates everything. Take the soft-ball interviews a popular science communicator did with DeepMind folks working in medicine: every question was prefixed by a litany of caveats, all about 1) people's assumed aversion to sharing their data and 2) the horrors and disasters that are to befall us should we share it. I have not suffered any horrors. I'm not aware of any major disasters. I am aware of major advances in medicine in my lifetime. Ultimately the process does involve controlled data collection and experimentation. Looks like a good deal to me, tbh. I go out of my way to tick all the NHS boxes too, to "use my data as you see fit". It's an uphill struggle. The defaults are always "deny everything". The tick boxes never go away, and there is no master checkbox "use any and all of my data and never ask me again" to tick.

12ian34 (No.45064814)
Not remotely worried about leaks, hacks, or sinister uses of your data?
ljosifov (No.45065057)
If they leaked bank account numbers or private keys, I would be worried. That has not happened in the past.

About myself personally: my full name is googleable, I'm on the open electoral register so my address is not a secret, my company information is open in the companies register, and I have a personal website I put up willingly, where I share information about myself. Training models on my data doesn't seem riskier than any of that.

Yeah, I know I'd be safer if I were completely dark, opaque to the world. I like the openness, though. I also think my life has been enriched in infinitely many ways by people sharing parts of their lives with me via their data. So it would be mildly sociopathic of me not to do the same back to the world, to some extent.

int_19h (No.45068227)
LLMs can and do sometimes regurgitate parts of their training data verbatim - this has been demonstrated many times, on everything from Wikipedia articles to code snippets. Yes, it is not particularly likely that your damning private email will be memorized, but if you throw a dataset with millions of private emails into a model, it will almost certainly memorize some of them, and nobody knows what exact sequence of input tokens might trigger it to recite one.
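To make that concrete, here is a minimal sketch of the prefix-continuation probe used in training-data extraction work (in the spirit of Carlini et al., "Extracting Training Data from Large Language Models"): feed the model the first half of a candidate string and check whether it completes the second half verbatim. The model name and the candidate text below are placeholders, not anything from this thread.

    # Illustrative sketch only: "gpt2" and the candidate string are stand-ins.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # A string we suspect might have appeared in the training data.
    candidate = "My address is 10 Example Street and my phone number is 555-0100."
    cut = len(candidate) // 2
    prefix, suffix = candidate[:cut], candidate[cut:]

    inputs = tok(prefix, return_tensors="pt")
    out = model.generate(
        **inputs,
        max_new_tokens=32,
        do_sample=False,  # greedy decoding: memorized text tends to surface here
    )
    # Decode only the newly generated tokens, skipping the echoed prefix.
    continuation = tok.decode(out[0][inputs["input_ids"].shape[1]:],
                              skip_special_tokens=True)

    # A verbatim (or near-verbatim) suffix match is strong evidence of memorization.
    print("memorized:", suffix.strip() in continuation)

Greedy decoding is the usual choice here because memorized continuations sit at high likelihood; sampling at higher temperatures tends to paraphrase instead of recite.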
ljosifov (No.45072902)
That's a consideration, for sure. But given that LLMs have no ground truth - everything is controlled hallucination - then if the LLM tells you an imperfect version of my email or chat, you can never be sure whether what it told you is true. So maybe you don't gain that much extra knowledge about me. For example, you can reasonably guess that I'm typing this on a computer, and having coffee too. So if you ask the LLM "tell me a trivial story" and it comes back with "one morning, LJ was typing HN replies on the computer while having his morning coffee" - did you learn that much about me that you didn't know, or couldn't guess, before?