
237 points robin_reala | 14 comments
scq ◴[] No.43514594[source]
This seems like a bug in the ScreenAI service? There's no evidence whatsoever for his claim that Google "trains a machine vision model on the contents of my screen".

According to https://chromium.googlesource.com/chromium/src/+/main/servic... it is just inference.

> These functionalities are entirely on device and do not send any data to network or store on disk.

There is also this description in the Chrome OS source code:

> ScreenAI is a binary to provide AI based models to improve assistive technologies. The binary is written in C++ and is currently used by ReadAnything and PdfOcr services on Chrome OS.

replies(2): >>43514631 #>>43516050 #
1. bri3d ◴[] No.43514631[source]
This. Go to chrome://flags and disable “Enable OCR For Local Image Search” and I bet the problem goes away.
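
On desktop Chrome/Chromium builds the same kind of toggle can usually also be flipped from the command line with --disable-features. A minimal sketch in Python, with the caveat that the feature name below is a placeholder guess (I haven't checked which base::Feature backs that flag, and ChromeOS itself doesn't expose a normal command line):

    # Hedged sketch: launch a desktop Chrome build with a feature disabled
    # via the standard --disable-features switch. "LocalImageSearchOcr" is a
    # placeholder guess; the authoritative toggle is the chrome://flags entry
    # mentioned above.
    import subprocess

    CHROME_BIN = "google-chrome"           # adjust for your platform/install
    FEATURE_GUESS = "LocalImageSearchOcr"  # placeholder; verify in chrome://flags

    subprocess.run([CHROME_BIN, f"--disable-features={FEATURE_GUESS}"])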

It’s a stupid feature for Google to enable by default on systems that are generally very low spec and badly made, but it’s not some evil data slurp. One of the most obnoxious things about enshittification is the corrosive effect it seems to have had on technical users’ curiosity: instead of researching and fixing problems, people now seem very prone to jump to “the software is evil and bad” and give up at doing any kind of actual investigation.

replies(8): >>43514714 #>>43514728 #>>43514740 #>>43514895 #>>43514932 #>>43515032 #>>43515378 #>>43515585 #
2. NoNotTheDuo ◴[] No.43514714[source]
> but it’s not some evil data slurp.

Not yet anyway. We’ve just seen Amazon change how all Echos/Alexas operate. Voice processing had been local-only for years and years, but now they want the audio data, so they’ve changed the Terms of Service. There’s no reason to believe Google won’t do the same thing sometime in the future.

replies(1): >>43514947 #
3. throwaway48476 ◴[] No.43514728[source]
In other words, tech companies have lost the benefit of the doubt.

That's what a decade of enshittification gets them.

replies(1): >>43514913 #
4. ashoeafoot ◴[] No.43514895[source]
And what were you doing when they took over, dad? Oh, I was an intestinal villain, secreting liberal free-choice messages for a corporation that didn't even pay me.
5. oefrha ◴[] No.43514913[source]
A decade ago people were also posting these outrage posts about Big Tech (and Small Tech) that more often than not turned out to be bugs/nothingburgers. I was here.
replies(1): >>43517922 #
6. bbarnett ◴[] No.43514932[source]
> It’s a stupid feature for Google to enable

Enter Google 2025!

No longer just terrible search due to lack of care and conflicts of interest.

Instead, now terrible search due to AI, terrible everything due to AI, pushed everywhere and everyplace, degrading and reducing capabilities ecosystem-wide.

Ridiculous and often just plain wrong AI gibberish on search pages, Android camera apps that blur people's faces while trying to "enhance" the pics you take, and of course replacing OCR that works well with half-finished, buggy AI junk.

From their doctored and made-up AI demos to their inability to make anything stable or of quality, Google has turned from world-class to Nikola in a short couple of years.

7. simpaticoder ◴[] No.43514947[source]
"Trauma" is when one horrible experience lowers your danger threshold so much that it triggers on everything, and becomes useless and harmful. "Learning" is when new threat awareness lowers the threshold an 'appropriate amount'. Even if the GP was strictly wrong about their conclusion, in my personal opinion they are quite right to remain vigilant.

Note to parent: it is strictly unfair to lump Google in with Amazon (and if you demonize a good actor long enough, eventually they'll acquiesce, since they are already paying the reputational price). However, given that they are American corporations operating on similar incentives during the Wild West (or World War) of AI aka WWAI, it makes sense to be suspicious. Heaven knows "reputational downside" is just about the only countervailing incentive left, since Trump has stripped consumers and investors of virtually all legal protection (see: CFPB elimination; SEC declines Hawk Tuah coin grift prosecution; Trump pardons Trevor Milton). I think it is an excellent time for all of us to be extremely careful with the software we use.

replies(1): >>43515036 #
8. TeMPOraL ◴[] No.43515032[source]
> One of the most obnoxious things about enshittification is the corrosive effect it seems to have had on technical users’ curiosity: instead of researching and fixing problems, people now seem very prone to jump to “the software is evil and bad” and give up at doing any kind of actual investigation.

There's little here worth being curious about. Tech companies made sure of that. They mostly aren't doing anything particularly groundbreaking in situations like these - they're doing the stupid or the greedy thing. And, on the off chance that the tech involved is in any way interesting, it tends to have decades of security research behind it applied to mathematically guarantee we can't use it for anything ourselves - and in case that isn't enough, there's also decades of legal experience applied to stop people from building on top of the tech.

Nah, it's one thing to fix bugs for the companies back when they tried or pretended to be friendly; these days, when half the problems are intentional malfeatures or bugs in those malfeatures, it stops being fun. There are other things to be curious about, that aren't caused by attempts to disenfranchise regular computer users.

replies(1): >>43515305 #
9. broknbottle ◴[] No.43515036{3}[source]
Google is an advertising company. Everything they do revolves around slurping up the most valuable data, to better identify people and spot trends. They’ve become less and less open as the years go by, and they still haven’t found their next big cash cow to offset the decline of their current one.
10. bri3d ◴[] No.43515305[source]
> There's little here worth being curious about.

I’m all for OP returning the computer Google broke, as sibling comments have suggested, but the curiosity route would have been fruitful for them too; I’m pretty sure the flag I posted or one of the adjacent ones will fix their issue.

I also personally found this feature kind of interesting in itself; I didn’t know that Google were doing model-based OCR and content extraction.

> on the off chance that the tech involved is in any way interesting, it tends to have decades of security research behind it applied to mathematically guarantee we can't use it for anything ourselves

My current profession and hobby is literally breaking these locks and I’m still not quite sure what you mean here. What interesting tech do you feel you can’t use or apply due to security research?

> there's also decades of legal experience applied to stop people from building on top of the tech.

Again… I’m genuinely curious what technology you feel is locked up in a legal and technical vault?

I feel that we’ve really been in a good age lately for fundamental technologies, honestly - a massive amount of AI research is published, almost all computing related sub-technologies I can think of are growing increasingly strong open-source and open-research communities (semiconductors all the way from PDK through HDL and synthesis are one space that’s been fun here recently), and with a few notable exceptions (3GPP/mobile wireless being a big one), fewer cutting edge concepts are patent encumbered than ever before.

> There are other things to be curious about, that aren't caused by attempts to disenfranchise regular computer users.

If anything, I feel like this is a counter-example? It’s an innocuous and valuable feature with a bug in it. There’s nothing weird or evil going on to intentionally or even unintentionally disenfranchise users. It’s something with a feature toggle that’s happening in open-source code.

> it's one thing to fix bugs for the companies back when they tried or pretended to be friendly

Here, we can agree. If a company are going to ship automatic updates, they need to be more careful about regressions than this, and they don’t deserve any benefit of the doubt on that.

11. _Algernon_ ◴[] No.43515378[source]
Learned helplessness is a common symptom of abuse. Not surprising that we would see it here as well.
12. mystified5016 ◴[] No.43515585[source]
> it’s not some evil data slurp

This puts a dangerous amount of trust in a company which has very clearly and explicitly signaled to everyone, for decades, that they do not care one iota about you, your privacy, or your safety.

Assuming that Google isn't doing anything malicious is a very unwise and ill-informed stance to take. If it isn't malicious now, it will be very soon. Absolutely no exceptions.

13. josephg ◴[] No.43517922{3}[source]
I was too. There was a period of great optimism around the web, Google, and (in part) Apple. But it was closer to 20 years ago now.

I remember talking to someone from Microsoft around that time (who were an enemy of the open-source world back then). They said the shine would wear off and everyone would get annoyed at and distrustful of Google too. I remember my conscious brain agreeing, but my emotional mind loved Google - we all did. I just couldn’t imagine it.

Well. It’s pretty easy to imagine now.

replies(1): >>43525297 #
14. bri3d ◴[] No.43525297{4}[source]
I think the fall happened a long time ago. It's funny - I'm often accused on HN lately of wearing rose-colored glasses, but I think the present is actually quite a bit better than 15 years ago when it comes to privacy. It's easy to forget how bad things were for a while in there.

15 years ago now I think Google were at their worst. Google were doing a good job in my eyes until roughly the time of the DoubleClick acquisition, when they pivoted away from "we're going to do ads the Good Way with AdWords" and into "screw it, we're just going to do ads," picked up the infamous DoubleClick cookie and their general "we profile people using every piece of data we can possibly think of" approach, and started making insane product decisions like public-contacts-by-default Google Buzz.

Since then, through a combination of courts forcing them to and what seems like a somewhat genuine internal effort, Google have been adding privacy controls back in many places. I certainly don't agree with the model still, but I think that Google in 2025 are actually much less of a privacy threat than 2010 Google were.

Outside Google, 15 years ago was also the peak of the browser-toolbar and installer-wrapper infostealer era, when, instead of building crypto scams or AI-wrapper companies, the hustle bros were busy building flat-out spyware.

I know I'm outside of the majority on HN recently, but I generally feel that the corporate _notion_ of user privacy has actually gotten a lot better since the early 2000s, while the _implementation_ has gotten worse. That is to say, companies, especially large ones, care much more about internal controls and have much less of a "we steal lots of data and figure out how to sell it later" model. Unfortunately, at the same time, we've seen the rise of "data driven" product management, always on updates, and "product telemetry," which erode the new attitude towards privacy at a technical level by building easily exploitable troves of sensitive information.

Of course, in exchange for large companies becoming more conscious about privacy, we now have a million smaller companies working to fill the "we steal all the data" shoes. It's still a battle that's far from won.