
747 points | porridgeraisin | 1 comment
ljosifov No.45064773
Excellent. What were they waiting for until now?? I thought they already trained on my data. I assume they train - even hope they train - even when they say they don't. People who want to be data privacy maximalists - fine, don't use their data. But there are people out there (myself included) at the opposite end of the spectrum, and we are mostly ignored by the companies. Companies just assume people only ever want to deny them their data.

It annoys me greatly that I have no tick box on Google to tell them "go and adapt the models I use on my Gmail, Photos, Maps, etc." I don't want Google to ever be mistaken about where I live - I have told them a hundred times already.

This idea that "no one wants to share their data" is just assumed, and it permeates everything. Take the soft-ball interviews that a popular science communicator did with DeepMind folks working in medicine: every question was prefixed by a litany of caveats, all about 1) people's assumed aversion to sharing their data and 2) the horrors and disasters that will befall us should we share it. I have not suffered any horrors. I'm not aware of any major disasters. I am aware of major advances in medicine in my lifetime. Ultimately the process does involve controlled data collection and experimentation. Looks a good deal to me, tbh. I go out of my way to tick all the NHS boxes too, to "use my data as you see fit". It's an uphill struggle. The defaults are always "deny everything". Tick boxes never go away, and there is no master checkbox "use any and all of my data and never ask me again" to tick.

replies(16): >>45064814 #>>45064872 #>>45064877 #>>45064889 #>>45064911 #>>45064921 #>>45064967 #>>45064974 #>>45064988 #>>45065001 #>>45065005 #>>45065065 #>>45065128 #>>45065333 #>>45065457 #>>45065554 #
AlexandrB No.45065333
I think I'd have more sympathy for this position if I thought these companies were still fundamentally interested in serving their users. They are not. Any information you provide is more likely to be used against your interests (even if that's "just" targeting you with ads for some scammy product) than for your benefit.

Basically all AI companies are fruit from the same VC-poisoned tree, and I expect these products will get worse and more user-hostile as they try to monetize. We're currently living in the "MoviePass"[1] era of AI, where users are being heavily subsidized in a bid to gain market share. It will not last, and the potential for abuse is enormous.

[1] https://en.wikipedia.org/wiki/MoviePass

replies(1): >>45068317 #
ljosifov No.45068317
Whether Google is interested in serving me or not is not only untestable (what counts as 'Google', 'interested', and 'serving' there - one could argue to the end of time) but beside the point. I want to be able to tell Google "my home is XYZ", and for Google to use that information across the whole Google ecosystem. When I talk to Gemini, it should know what and where "LJ home" is; when I write in a Gdoc, it should know my home address (so it can insert it if I want); ditto for Gmail; and when I search Google Photos for "photos taken at home", it should also know what "home" is for me.

Atm Google vaguely knows, and uses that for ads targeting, sometimes. Most of the time the targeting is bad, very low-quality slop - on the level of "he bought a mattress yesterday, so he will keep buying mattresses for the next 30-60 days". I have the impression that we ended up in the worst-case scenario. The people I don't want to have my data have access to it. The people I do want to have my data are afraid to touch it and use it - yes! - for their benefit, but for mine too. The current predicament seems to me a case of "public lies, private truths".

A small cadre of vocal proponents of a particular view established "the ground truth of what is desirable" (in this case: maximum privacy, ideally zero information sharing). The public goes along with it in words, pays lip service, while in deeds their revealed preferences show they value their data privacy very cheaply, almost at zero. Even one extra click to share their data less is one click too many, effort too high, for most people. Again - these are revealed preferences, because people keep lying when asked. It's not even a case of "you are lying to me" - no, it's more like "you are lying to yourself."

The conventional opinion is that, given the power imbalance coming from the information imbalance (state/business know a lot about me; I know little about them), we citizens and consumers should reduce our "information surface" towards them, and address the imbalance that way. But there exists another, often unmentioned option: for state/business to open up, to increase their "information surface" towards us, their citizens/consumers. That would also achieve an information (and, one hopes, power) rebalance. Yes, there is extra work on the part of state/business to open their data to us. But it's worth it. The more advanced the society, the more coordination it needs to achieve the right cooperation-competition balance in the interactions between ever greater numbers of people. There is an old book, "Data for the People", by Andreas Weigend, an early AI pioneer and former chief scientist at Amazon. Afaics it describes well the world we live in, and the one we are likely to live in even more in the future.

replies(1): >>45070922 #
danparsonson No.45070922
You started by saying that it's difficult or impossible to define what 'serving the user' looks like, then immediately gave examples of what it would look like to you. It's not that Google can't do these things or is afraid to; it's that operating in your best interests does not benefit their shareholders. Sure, it'd be great if we could all just get along, but we're living in the worst-case scenario you describe precisely because we can't. Not trusting companies like Google with your personal data is the pragmatic choice: regardless of what they could do with our data, what they actually do with it is what counts.

Side note: they know exactly where you live. My colleague's Android phone used to tell him, without any prompting or specific configuration, how long his drive home from work would take that day. That was over ten years ago.

replies(1): >>45073056 #
ljosifov No.45073056
Yes - I meant 'difficult to impossible' to define for all people, at all times. Agreed, it's easy for me to define how that looks for me; it doesn't mean the same is true for you. That's why I went from the very general to the very specific.

I'm saying we ended up in a situation where people are lying when they say "I don't trust Google", b/c they have Gmail and use Google services - so their trust can't be zero. It's more than zero. Obviously it's a trade-off: people are pragmatic, they do their cost-benefit analysis and act accordingly. They just lie when they talk about the subject. I think it'd be better for all if the public discussion moved from "I trust Google zero" (which is obviously untrue) to "there is a cost-benefit to this, and I personally chose xyz".