
Are we the baddies?

(geohot.github.io)
692 points by AndrewSwift | 7 comments
afiodorov ◴[] No.44478380[source]
We should not underestimate the timeless human response to being manipulated: disengagement.

This isn't theoretical, it's happening right now. The boom in digital detoxes, the dumbphone revival among young people, the shift from public feeds to private DMs, and the "Do Not Disturb" generation are all symptoms of the same thing. People are feeling the manipulation and are choosing to opt out, one notification at a time.

replies(5): >>44478542 #>>44478752 #>>44479222 #>>44479422 #>>44483888 #
alganet ◴[] No.44478542[source]
> disengagement.

That disengagement metric is valuable, I'm not gonna give it away for free anymore. I'll engage and disengage randomly, so no one knows what works.
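The "engage and disengage randomly" idea does flatten the signal: if every item gets a coin-flip response, the observed engagement rate carries no information about actual preference. A minimal sketch of that effect (the profiler model and topic names are illustrative, not any real system):

```python
import random

random.seed(0)

def coin_flip_engagement(item: str) -> bool:
    # Engage with probability 0.5 regardless of content, so the
    # engagement signal is independent of actual preference.
    return random.random() < 0.5

# A hypothetical profiler estimates interest per topic from engagement rates.
topics = ["sports", "politics", "tech", "music"]
shown = {t: 0 for t in topics}
engaged = {t: 0 for t in topics}

for _ in range(10_000):
    topic = random.choice(topics)
    shown[topic] += 1
    if coin_flip_engagement(topic):
        engaged[topic] += 1

# Every topic's rate converges to ~0.5: the inferred profile is flat.
rates = {t: engaged[t] / shown[t] for t in topics}
print(rates)
```

The trade-off is that truly random behavior also costs the user: roughly half of their engagement goes to content they don't actually want.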

> The boom in digital detoxes, the dumbphone revival among young people

That's a market now. It doesn't mean shit. It's a "lifestyle".

> People are feeling the manipulation

They don't. Even manipulation awareness is a market now. I'm sure there are YouTubers who thrive on it.

---

How far can you game a profiling algorithm? Can you make it think something about you that you're not? How much can one break it?

Those are the interesting questions.

replies(7): >>44478588 #>>44478617 #>>44478622 #>>44481127 #>>44482261 #>>44482563 #>>44483782 #
__MatrixMan__ ◴[] No.44481127[source]
> How far can you game a profiling algorithm?

I think pretty far. I expect the future involves a nonsense layer full of AI slop being read and written by AIs. Mapping it onto the actual humans will be difficult unless you have a preexisting trust relationship with those humans, such that they can decrypt the slop into your actual communications.
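That "preexisting trust relationship" could be as simple as a pre-shared key: peers who hold it can pick the real messages out of the slop, while everyone else sees undifferentiated noise. A minimal sketch, assuming a shared secret and a MAC tag appended to each real line (everything here is illustrative, not a real protocol):

```python
import hmac
import hashlib

KEY = b"pre-shared secret between trusting humans"  # hypothetical

def tag(line: str) -> str:
    # Short authentication tag; only key holders can produce or verify it.
    return hmac.new(KEY, line.encode(), hashlib.sha256).hexdigest()[:8]

def publish(real: str, slop: list[str]) -> list[str]:
    """Mix one real line (with a valid tag) into machine-generated noise."""
    feed = [f"{s}|{'0' * 8}" for s in slop]  # slop gets a junk tag
    feed.insert(1, f"{real}|{tag(real)}")
    return feed

def read(feed: list[str]) -> list[str]:
    """Peers with the key keep only the lines whose tag verifies."""
    kept = []
    for entry in feed:
        line, _, t = entry.rpartition("|")
        if hmac.compare_digest(t, tag(line)):
            kept.append(line)
    return kept

feed = publish("meet at noon", ["AI slop about synergy", "more slop"])
print(read(feed))  # ['meet at noon']
```

Without the key, the feed is just three equally plausible lines; with it, the slop filters itself out.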

replies(1): >>44481309 #
alganet ◴[] No.44481309[source]
I think it's more difficult than it seems.

If I were an algorithm-profiling company, I would try to anchor my profiling in the real world (what kinds of people I talk to, about what, what kinds of places I visit, what others' opinions of me are, etc.). LLM garbage would just be there to draw voluntary participation in potential surveys.

It takes a particularly paranoid and stubborn individual to make the effort to consider what kinds of profiling could be done with such anchored data, and even more effort to probe it enough to learn how it works.

replies(1): >>44482657 #
__MatrixMan__ ◴[] No.44482657[source]
I agree that it's currently paranoid to hide your activities so that the algorithm profiling companies see you as several different people, or see the activities of millions as if they're just one... Automated misdirection on the part of the users is, so far, minimal.

But the point of such a company is to sell data on individuals, and a problematic use case for such data is to kill the ones who say things that you don't like. As that becomes cheaper and easier to do I think we'll find that it's not so paranoid to hide.

replies(1): >>44482849 #
alganet ◴[] No.44482849[source]
Maybe the endgame is honesty. Not pretending you're several people, or other convoluted ways of misdirection and disguise.

Just honestly acknowledge that profiling exists, and explicitly work against it.

That should be enough to make any algorithm company that notices something is wrong trip over its own wires, thinking some more elaborate form of hackery or covert action is being employed.

The likelihood of some company noticing a single user is quite low, though. But if you are able to hook even a single person inside that company using nothing but honesty and no tricks, that is the best trick of all.

replies(1): >>44482998 #
__MatrixMan__ ◴[] No.44482998[source]
My proposal wasn't that individuals should juggle accounts and cookies in a machine-vs-human game of cat and mouse, but rather that we should rewrite the protocols we use to play that game for us. I don't think there's anything dishonest about that--it's just that making computers lie to each other is the honest work of protecting well-meaning humans from malicious humans.

Did you have a different sort of explicit work against profiling in mind?

replies(1): >>44484613 #
alganet ◴[] No.44484613[source]
Just being honest, mostly.

I'll say things I know will get me downvoted.

I'll criticize things that could benefit me if I think they're manipulative.

I won't do alt accounts even if everyone does it.

I think most surveillance and advertisement relies on social dynamics. I attempt to play the algorithms but not the people. Sometimes it will get misunderstood, and that's fine.

replies(1): >>44486280 #
__MatrixMan__ ◴[] No.44486280[source]
Ah, well I hope that sharing your honest opinion about it turns out to be an effective strategy. But I'm afraid we'll regret not interfering more directly than that.
replies(1): >>44486664 #
alganet ◴[] No.44486664[source]
I think surveillance is very, very advanced. But active meddling is old tricks.

One should consider the combination: you can't lie to some systems, so the better strategy is to be honest; you can lie to other systems, but those won't be load-bearing, so why do it?

I will also observe back. The active meddling thing, when observed in action, is a source of information. It could be lying to me too, predicting that observation is inevitable and camouflaging itself. Of course, I could be predicting that as well (and so on).

Notice how many interesting scenarios exist once honesty is considered a viable strategy in a hypothetical total-surveillance situation?

replies(1): >>44493357 #
__MatrixMan__ ◴[] No.44493357[source]
> The active meddling thing, when observed in action, is a source of information.

Not if it's done right. If one person views a page the old-fashioned way, caches the DOM, and circulates it peer-to-peer, then whoever is weaponizing that content only has one browser fingerprint to work with, despite there being potentially thousands of users that they wish they could profile.

That's far less information to work with than the thousands of individual requests they would otherwise have to scrutinize.
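The fingerprint-reduction argument can be made concrete with a toy model: an origin server that records one "fingerprint" per request sees a thousand distinct users when everyone fetches directly, but only the single volunteer when the cached copy circulates peer-to-peer (names and numbers here are illustrative):

```python
class Origin:
    """Toy origin server that fingerprints every request it receives."""

    def __init__(self):
        self.fingerprints_seen = set()
        self.page = "<html>the content</html>"

    def fetch(self, fingerprint: str) -> str:
        self.fingerprints_seen.add(fingerprint)
        return self.page

users = [f"user-{i}" for i in range(1000)]

# Scenario A: everyone fetches directly -> 1000 distinct fingerprints.
direct = Origin()
for u in users:
    direct.fetch(u)

# Scenario B: one volunteer fetches, caches the DOM, and circulates it
# peer-to-peer; the origin only ever sees the volunteer.
shared = Origin()
cached = shared.fetch("volunteer")
copies = {u: cached for u in users}  # peers receive it out of band

print(len(direct.fingerprints_seen), len(shared.fingerprints_seen))  # prints: 1000 1
```

The cost falls entirely on the volunteer, which is why the follow-up point about protecting that person matters.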

The honest/dishonest distinction only comes down to whether you're going to try to protect the volunteer who grabbed the page to begin with, or whether you're going to expose them to retribution.

As for the systems you can't lie to, those you can replace with more trustworthy alternatives. This is a lot of work but it's better than suggesting that your peers be honest in ways that will harm them.

So to answer your question, no. None of the scenarios where you let your adversary know that you're working against them, and also let them know how to find and harm you, are interesting strategies for combatting surveillance.

Surveillance exists in support of targeted coercion. We should not make a target of the more honest among us. We need to protect those people most of all.

replies(1): >>44494866 #
alganet ◴[] No.44494866[source]
We're talking about different things.

You need to imagine a surveillance system that you cannot lie to, and cannot avoid or replace. It will be there, with no way of escaping it. Satellites, network monitoring, it doesn't matter. Assume it exists.

Anyone in control of such hypothetical systems can act upon the surveillance information to manipulate a target (not only observe it). This could be done in several ways. LLM bots encouraging you to volunteer information, gaslighting, etc.

The load-bearing component of such surveillance systems is _not_ these actors (LLMs, bots, etc.). It's _the need for surveillance_.

What encourages a society to produce surveillance in the first place? Catching bad guys, protecting people, etc. I'm not saying that I agree with it; it's just the way it works.

Anyone doing shady things is a reason for surveillance to exist. Lying is one of those things; making LLM bots is one of those things. Therefore, to target the load-bearing aspect of surveillance, I need to walk in a straight line (I won't deploy LLM bots, create alt accounts, etc.). There should be no reason to surveil me, unless whoever is in control is some kind of dictator or has developed some kind of fixation on me (then it's their problem, not mine).

I can do simple things, like watching videos I don't particularly like, or posting nonsense creative stories on a blog: simple things designed to hide nothing (they're playful, with no intent). Why would someone care about what I post on a blog that no one visits? Why would someone care about the videos I watch? If someone starts to act on those things, it is because I'm being surveilled. They're honeypots for surveillance; there's nothing behind them.

With those, I can more easily see whoever is trying to act upon my public information, because it is marked. They will believe they're targeting my worldviews or preferences or personality, but they're actually acting on things "marked with high-visibility paint". In fact, I leave explicit notes about it, like "do not interact with this stuff". Only automated surveillance systems (unable to reason) or fanatic stalkers (unable to reason) would ignore such clear warnings.

This strategy is, like I mentioned, mostly based on honesty. It targets the load-bearing aspect of surveillance (the need for it) by making it useless and unnecessary (why are you surveilling me? I can see how you are acting upon it).

It's not about making honest people targets, it's about making surveillance useless.

replies(1): >>44496779 #
__MatrixMan__ ◴[] No.44496779{3}[source]
I suppose we are. I generally assume that someone, somewhere, has something to hide: something that benefits me if they're allowed to keep it hidden. History is full of these characters; they keep the establishment in check and prevent it from treating the rest of us too badly.

If the powers that be could know with certainty that all of us planned to behave and would never move against them (or could neutralize those who had been honest about their intent to do so), then I think things would be much worse for the common folk than they are now. It's hard not to see your strategy as a path to that outcome.

replies(1): >>44496973 #
alganet ◴[] No.44496973{4}[source]
> keep the establishment in check

The ultimate subversion of the establishment is raw honesty. Honesty produces an environment that disables unjust distributions of power.

> never move against them

That's inaction, not honesty.

> It's hard not to see your strategy as a path to that outcome.

That's ok, my strategy does not require you to understand it. I don't need to create informative material or convince people.

replies(1): >>44497275 #
__MatrixMan__ ◴[] No.44497275{5}[source]
> Honesty produces an environment that disables unjust distribution of power

How does this work?

replies(1): >>44501063 #
alganet ◴[] No.44501063{6}[source]
In theory, I shouldn't need to explain this.