Most active commenters
  • alganet(12)
  • __MatrixMan__(7)
  • lotsofpulp(4)
  • afiodorov(3)
  • vntok(3)
  • makeitdouble(3)
  • GLdRH(3)

Are we the baddies?

(geohot.github.io)
693 points by AndrewSwift | 55 comments
afiodorov ◴[] No.44478380[source]
We should not underestimate the timeless human response to being manipulated: disengagement.

This isn't theoretical, it's happening right now. The boom in digital detoxes, the dumbphone revival among young people, the shift from public feeds to private DMs, and the "Do Not Disturb" generation are all symptoms of the same thing. People are feeling the manipulation and are choosing to opt out, one notification at a time.

replies(5): >>44478542 #>>44478752 #>>44479222 #>>44479422 #>>44483888 #
1. alganet ◴[] No.44478542[source]
> disengagement.

That disengagement metric is valuable, I'm not gonna give it away for free anymore. I'll engage and disengage randomly, so no one knows what works.

> The boom in digital detoxes, the dumbphone revival among young people

That's a market now. It doesn't mean shit. It's a "lifestyle".

> People are feeling the manipulation

They don't. Even manipulation awareness is a market now. I'm sure there are YouTubers who thrive on it.

---

How far can you game a profiling algorithm? Can you make it think something about you that you're not? How much can one break it?

Those are the interesting questions.

replies(7): >>44478588 #>>44478617 #>>44478622 #>>44481127 #>>44482261 #>>44482563 #>>44483782 #
2. notarobot123 ◴[] No.44478588[source]
The Algorithm doesn't care if you're illegible. However much you mess with it, you're still its plaything.
replies(3): >>44478701 #>>44479126 #>>44480571 #
3. jagrsw ◴[] No.44478617[source]
> I'll engage and disengage randomly, so no one knows what works.

Any predictable pattern, including when you disengage, is just another feature for the pricing model. If the model learns you reliably leave after 3 hours, it will simply front-load the surge pricing into that initial window.

  Analysis: This user disengages 75% of the time and
  belongs to a group of 5% who do the same. The expected
  revenue for this group over a longer period and with
  multiple users is 24% lower than for the average user.

  Action: Since 80% of their engagements last for at
  least 12 hours, ads should be shown and prices
  increased only within the first three hours.
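
A toy sketch of how that front-loading could look (all numbers invented; `session_hours` and the quartile cutoff are illustrative, not any real platform's logic):

  # Toy front-loading model: learn a user's typical session length,
  # then concentrate ads/surge pricing before they usually leave.
  from statistics import quantiles

  def surge_window(session_hours: list[float]) -> float:
      # Hour by which ~25% of past sessions had already ended;
      # monetize before the earliest likely exits.
      return quantiles(session_hours, n=4)[0]

  past_sessions = [3.0, 12.5, 14.0, 13.0, 2.5, 15.0]  # invented history
  print(f"Front-load ads/pricing into the first {surge_window(past_sessions):.1f}h")
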
Hope this helps :)
replies(2): >>44479829 #>>44483407 #
4. afiodorov ◴[] No.44478622[source]
There's nothing an algorithm can do against disciplined, intentional engagement.

If you know which car you want to buy, it doesn't matter what the salesman has to say.

replies(4): >>44478795 #>>44479376 #>>44480492 #>>44480861 #
5. properclass ◴[] No.44478701[source]
ineligible or illegible?
replies(1): >>44478711 #
6. immibis ◴[] No.44478711{3}[source]
illegible.

https://en.m.wikipedia.org/wiki/Seeing_Like_a_State#Summary

7. vntok ◴[] No.44478795[source]
What car you want to buy is just one tiny part of the transaction. The salesman can and will manipulate you on everything else from price to warranty, from payment schedule to cross-sale rebates, from maintenance subscription to registration fees, from additional options to spare tires.
replies(1): >>44478886 #
8. afiodorov ◴[] No.44478886{3}[source]
You're right, they can try to manipulate you on a thousand tiny things. My counter-argument is that at a certain point, it's not worth the mental energy to fight over what amounts to pennies on the dollar.

Anecdotally, when I bought my car recently, they forgot to even offer me the extended warranty they'd planned to push. I find it funny to think it was so minor, even they forgot to care.

replies(2): >>44479389 #>>44481340 #
9. pdimitar ◴[] No.44479126[source]
The algorithm still can't make me buy or read rage-bait.

Of course the machine will never stop trying. But with results gradually decreasing over time, the human will get discouraged and turn it off. It's already happening in places, btw.

10. latexr ◴[] No.44479376[source]
> If you know which car you want to buy it doesn't matter what the salesman has to say.

Sure it does. The salesman may have information you were not aware of. They could even tell you something which satisfies your needs better and is cheaper. Not all salesmen are out to screw you; some really care about a happy customer.

I'm reminded of an old Hypercritical episode. If you ever heard John Siracusa, you know he does his research and knows what he wants. Yet when it came time to buy a TV, which he had intensively researched, the salesman mentioned plasma and how the tech had improved, and it threw a wrench in Siracusa's whole decision.

https://overcast.fm/+AA3EXrnIDrA/1:23:08

replies(2): >>44480477 #>>44483313 #
11. ctxc ◴[] No.44479389{4}[source]
Tangential, but I think most extended warranties I've looked at are beneficial. Just last month I was kicking myself for forgetting to extend a 2-year warranty that cost 1/4th of the one-time repair bill I had to cough up.
replies(2): >>44479632 #>>44479992 #
12. blincoln ◴[] No.44479632{5}[source]
Are you sure the extended warranty would have covered it?

I paid for an extended warranty on the first car I ever bought. Turned out it didn't cover any of the things the salesperson cited as good reasons to pay for it, and to maintain the warranty, I'd have to pay the dealer for all maintenance - even oil changes.

That car never needed any repairs, but seeing the list of exclusions convinced me to never pay for an extended warranty again.

13. alamortsubite ◴[] No.44479829[source]
At which point the user disengages from the platform permanently. Great work.
14. lotsofpulp ◴[] No.44479992{5}[source]
> but I think most extended warranties I've noticed are beneficial.

If this were true, it would result in a loss for the issuer of the warranty.

replies(2): >>44483384 #>>44483834 #
15. denkmoon ◴[] No.44480477{3}[source]
How intensively can you possibly have researched if a salesman mentioning an entire category of display technology is a curveball for you?
replies(1): >>44483559 #
16. makeitdouble ◴[] No.44480492[source]
The salesman can cut the car you want from your buying options, or stick conditions on it that will make up for the difference with the other models.

That's what we're seeing with YouTube, for instance: your choice is to pay YouTube's price for Premium (literally paying to not get bullied), sit through all the ads in the world, or get three strikes after playing the ad-blocking cat-and-mouse game for long enough.

Of course you're still free to go somewhere else, in a world where even public guides and presentations will often be pushed to YouTube only, to offload the bandwidth costs of standard web services.

replies(2): >>44480806 #>>44480955 #
17. alganet ◴[] No.44480571[source]
I'm not trying to shake it off.
18. jen20 ◴[] No.44480806{3}[source]
> The salesman can cut the car you want from your buying options, or stick conditions on it that will make up for the difference with the other models.

My favourite approach to this is to write an email to all dealerships within the radius I’m willing to go, explaining what I want, then “publicly” make them bid for my business in a thread with their peers. I’ve had it work several times now.

19. alganet ◴[] No.44480861[source]
I wonder if there are other kinds of profiling algorithms not related to sales.
replies(1): >>44484019 #
20. fc417fc802 ◴[] No.44480955{3}[source]
> get three strikes after playing the ad-blocking cat and mouse game for long enough

I've never encountered this. What is it?

replies(2): >>44481354 #>>44483796 #
21. __MatrixMan__ ◴[] No.44481127[source]
> How far can you game a profiling algorithm?

I think pretty far. I expect the future involves a nonsense layer full of AI slop being read and written by AIs. Mapping it onto the actual humans will be difficult unless you have a preexisting trust relationship with those humans, such that they can decrypt the slop into your actual communications.
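
A minimal sketch of that trust-gated decryption, assuming a pre-shared key and the third-party `cryptography` package (the slop string and message are invented):

  # Hide an authenticated, encrypted message inside filler text ("slop").
  # The pre-shared key stands in for the "preexisting trust relationship".
  from cryptography.fernet import Fernet, InvalidToken

  key = Fernet.generate_key()                  # shared out-of-band with trusted humans
  f = Fernet(key)
  token = f.encrypt(b"meet at noon").decode()  # the actual communication

  slop = f"Top 10 reasons to love synergy! {token} Like and subscribe!"

  # A peer holding the key scans the slop and keeps whatever verifies;
  # everyone else sees just one more wall of noise.
  for word in slop.split():
      try:
          print(f.decrypt(word.encode()))      # b'meet at noon'
      except InvalidToken:
          pass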

replies(1): >>44481309 #
22. alganet ◴[] No.44481309[source]
I think it's more difficult than it seems.

If I were an algorithm-profiling company, I would try to anchor my profiles in the real world (what kinds of people you talk to, about what, what kinds of places you visit, what others think of you, etc.). LLM garbage would just be bait to draw voluntary participation in potential surveys.

It takes a particularly paranoid and stubborn individual to make the necessary efforts to consider what kinds of profiling could be done with such anchored data, and even more effort to probe it enough to acquire some knowledge on how it works.

replies(1): >>44482657 #
23. alganet ◴[] No.44481340{4}[source]
> it's not worth the mental energy to fight over what amounts to pennies

Maybe it's not about the money. Maybe I see challenging profiling algorithms as entertainment.

24. makeitdouble ◴[] No.44481354{4}[source]
https://support.google.com/youtube/answer/14129599?hl=en

Depending on when you look at it, it might be worked around or fully enforced again. It comes and goes, but at least YouTube doesn't seem willing to give up that stance entirely.

25. spacemadness ◴[] No.44482261[source]
Have you read anything by Mark Fisher? He spoke about capitalism absorbing all resistance, which makes it almost impossible to ever escape. Which is what you're saying, I think. Resistance becomes the next market and works through the same economic systems it's attempting to resist.
replies(1): >>44482879 #
26. Levitating ◴[] No.44482563[source]
> The boom in digital detoxes, the dumbphone revival among young people

>> That's a market now. It doesn't mean shit. It's a "lifestyle".

The fact that there's such a market now means something on its own, I believe. Regardless of whether it's a "lifestyle", it's a lifestyle people are choosing now. I know more and more people who either don't own a smartphone or have it on DnD at all times.

It's the same for "manipulation awareness" or whatever. You can't will a market into existence; the market has to already exist because people are drawn to it.

I am not saying that it will matter in the end, but I can say for a fact that there are people consciously disengaging from social media.

replies(1): >>44487485 #
27. __MatrixMan__ ◴[] No.44482657{3}[source]
I agree that it's currently paranoid to hide your activities so that the algorithm profiling companies see you as several different people, or see the activities of millions as if they're just one... Automated misdirection on the part of the users is, so far, minimal.

But the point of such a company is to sell data on individuals, and a problematic use case for such data is to kill the ones who say things that you don't like. As that becomes cheaper and easier to do I think we'll find that it's not so paranoid to hide.

replies(1): >>44482849 #
28. alganet ◴[] No.44482849{4}[source]
Maybe the endgame is honesty. Not pretending you're several people, or other convoluted ways of misdirection and disguise.

Just honestly acknowledge that profiling exists, and explicitly work against it.

That should be enough to make any algorithm company that notices something is wrong trip over its own wires, thinking some more elaborate form of hackery or covert operation is being employed.

The likelihood of some company noticing a single user is quite low, though. But if you are able to hook even a single person inside that company using nothing but honesty and no tricks, that is the best trick of all.

replies(1): >>44482998 #
29. willturman ◴[] No.44482879[source]
David Foster Wallace made a similar argument about television being able to absorb, re-contextualize, and subsequently market any effort opposed to it as a cause of malignant addiction and abdication of societal responsibilities in his essay E Unibus Pluram.

Today you can probably substitute YouTube, TikTok, etc. for television, but the argument still holds up, perhaps better than ever.

replies(1): >>44483399 #
30. __MatrixMan__ ◴[] No.44482998{5}[source]
My proposal wasn't that individuals should juggle accounts and cookies in a machine-vs-human game of cat and mouse, but rather that we should rewrite the protocols we use to play that game for us. I don't think there's anything dishonest about that--it's just that making computers lie to each other is the honest work of protecting well-meaning humans from malicious humans.

Do you have a different sort of explicitly working against profiling that you had in mind?

replies(1): >>44484613 #
31. const_cast ◴[] No.44483313{3}[source]
The heuristic is that pretty much anyone who is trying to sell you something is out to screw you. They are probably lying, and they are probably trying to get a quick buck from people who don't know better. And when I say anyone, I mean anyone. Youtubers, anyone on TikTok who links anything, advertisements on the web, billboards.

It's not 100% but if you follow the heuristic you save a lot of money and generally have higher quality goods in your life.

The reason we got into this mess is because advertising on the internet broke all the rules. Now lying is de-facto allowed. So then everyone else followed suit to compete. If your competitors are lying, you cannot afford not to lie.

So now all advertisement and sales are compromised and should not be trusted. Even large, previously trusted corporations are running scams in America. Professionals making six figures are acting as con artists. It's unbelievable how fast the situation has deteriorated.

replies(1): >>44483797 #
32. GLdRH ◴[] No.44483384{6}[source]
That's not how insurance works
replies(1): >>44490810 #
33. GLdRH ◴[] No.44483399{3}[source]
It's sad he can't witness the death of television
34. GLdRH ◴[] No.44483407[source]
He said randomly, which means the opposite of predictable or reliable. Sometimes he won't disengage for years and the algorithm would be none the wiser!
35. latexr ◴[] No.44483559{4}[source]
> How intensively can you possibly have researched

Listen to it. Start way before the given timestamp.

36. AnimalMuppet ◴[] No.44483782[source]
> Those are the interesting questions.

Not to me. I don't want to manipulate the manipulators. I just want to not be manipulated. I want to be able to go through my day without having to fight off manipulation in order to do and be what I want to do and be.

The goal is my freedom, not to "stick it to the man" in some way that won't actually matter to them.

37. bmicraft ◴[] No.44483796{4}[source]
Mostly people using subpar ad blockers, or more than one at once.
replies(1): >>44485842 #
38. ryandrake ◴[] No.44483797{4}[source]
Exactly. People say "Oh, I'm not affected by advertising, and I ignore sales pitches. I am very smart and do my own research!" But what are they researching? Marketing literature! They think they are informing themselves, but in reality, they are just seeking out marketing disguised as impartial facts.

I laugh when people say "I use site:reddit.com to scope my Google searches for product information because I'm getting impartial information from real people."

39. vntok ◴[] No.44483834{6}[source]
Interesting. Can you expand a bit on what your reasoning is so we can understand where you come from?
replies(1): >>44490667 #
40. felurx ◴[] No.44484019{3}[source]
Depends on your definition of sale, but influencing elections comes to mind.
41. alganet ◴[] No.44484613{6}[source]
Just being honest, mostly.

I'll say things I know will get me downvoted.

I'll criticize things that could benefit me if I think they're manipulative.

I won't do alt accounts even if everyone does it.

I think most surveillance and advertisement relies on social dynamics. I attempt to play the algorithms but not the people. Sometimes it will get misunderstood, and that's fine.

replies(1): >>44486280 #
42. makeitdouble ◴[] No.44485842{5}[source]
To note, what uBlock is doing to work around this is far from the trivial "just block the ad" behavior, and I expect it to break again (perhaps within weeks?).

Then the waltz will go on as usual, until the ads are baked straight into the video feed, with the server refusing to serve the rest of the content on a per-client basis.

43. __MatrixMan__ ◴[] No.44486280{7}[source]
Ah, well I hope that sharing your honest opinion about it turns out to be an effective strategy. But I'm afraid we'll regret not interfering more directly than that.
replies(1): >>44486664 #
44. alganet ◴[] No.44486664{8}[source]
I think surveillance is very, very advanced. But the active meddling thing is old tricks.

One should consider this combination. You can't lie to some systems; the better strategy is to be honest. You can lie to other systems, but those won't be load-bearing, so why do it?

I will also observe back. The active meddling thing, when observed in action, is a source of information. It could be lying to me too, predicting that observability is inevitable and camouflaging it. Of course, I could be predicting that as well (and so on).

Notice how many interesting scenarios exist even when honesty is considered a viable strategy in a hypothetical total-surveillance situation?

replies(1): >>44493357 #
45. alganet ◴[] No.44487485[source]
> there are people consciously disengaging from social media

There are people _voluntarily_ disengaging. It is different.

We're talking about manipulation; you have to consider the possibility of unconscious decisions.

46. lotsofpulp ◴[] No.44490667{7}[source]
I guess I should have specified financial benefit.

You wouldn't pay someone else to insure a common vegetable, because it is so low cost that if it turned out to be bad, you would just buy another one (or have bought extra as your insurance).

When you buy from Walmart/Target/Amazon/Best Buy, they will try to sell you insurance for a $30 toaster or other cheap appliance. Again, most people will not buy this because they will believe the appliance will work sufficiently long or that the warranty process will be too time consuming, or otherwise decide that just quickly replacing the cheap appliance with another is the preferred way to insure it.

The insurance seller is a business and has to earn more than what they pay out for claims (or at least to make payroll if it is a mutual insurance company). Otherwise, they are going to lose money over time and go out of business. If you financially benefit from it, then you are either lucky, or had an information edge over the insurance underwriter.

Of course, if you get peace of mind from buying insurance, and count that as a benefit, then most insurance is beneficial in that case.
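
A back-of-the-envelope version of that argument, with invented numbers for the $30-toaster case:

  # Toy expected-value check on a $30 toaster warranty (numbers invented).
  warranty_price = 5.00
  replacement_cost = 30.00
  p_failure_in_term = 0.05  # assumed failure rate during the covered term

  expected_payout = p_failure_in_term * replacement_cost  # $1.50
  print(f"pay ${warranty_price:.2f} for ${expected_payout:.2f} expected payout")
  # The gap covers the seller's payroll and profit, which is the point above.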

replies(1): >>44497769 #
47. lotsofpulp ◴[] No.44490810{7}[source]
I meant financially benefit. See

https://news.ycombinator.com/item?id=44490667

Insurance seller has to earn at least enough for payroll, so at least some of the premiums go towards that instead of any money received from claims.

Investment earnings cancel out because both the insurance buyer and the insurance seller have access to the same returns via broad market index funds. I.e., you can self-insure and get the same returns on your savings that the insurance seller would get if you gave them the premium.

48. __MatrixMan__ ◴[] No.44493357{9}[source]
> The active meddling thing, when observed in action, is a source of information.

Not if it's done right. If one person views a page the old-fashioned way, caches the DOM, and circulates it peer-to-peer, then whoever is weaponizing that content only has one browser fingerprint to work with, despite there being potentially thousands of users that they wish they could profile.

That's far less information to work with than the thousands of individual requests they would otherwise have to scrutinize.
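
A minimal sketch of the fetch-once, share-many idea (hypothetical URL; a real system would still need a transport and a way to protect the volunteer):

  # One volunteer fetches the page; peers read the cached copy instead of
  # hitting the origin, so the origin sees a single browser fingerprint.
  import hashlib, pathlib, urllib.request

  url = "https://example.com/article"  # hypothetical page
  html = urllib.request.urlopen(url).read()
  digest = hashlib.sha256(html).hexdigest()

  # Circulate the file peer-to-peer; receivers check the hash and never
  # contact the origin themselves.
  pathlib.Path(f"{digest}.html").write_bytes(html)
  print(f"share {digest}.html out-of-band; verify sha256 == {digest[:16]}...")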

The honest/dishonest distinction only comes down to whether you're going to try to protect the volunteer who grabbed the page to begin with, or whether you're going to expose them to retribution.

As for the systems you can't lie to, those you can replace with more trustworthy alternatives. This is a lot of work but it's better than suggesting that your peers be honest in ways that will harm them.

So to answer your question, no. None of the scenarios where you let your adversary know that you're working against them, and also let them know how to find and harm you, are interesting strategies for combatting surveillance.

Surveillance exists in support of targeted coercion. We should not make a target of the more honest among us. We need to protect those people most of all.

replies(1): >>44494866 #
49. alganet ◴[] No.44494866{10}[source]
We're talking about different things.

You need to imagine a surveillance system that you cannot lie to, and cannot avoid or replace. It will be there, with no way of escaping it. Satellites, network monitoring, doesn't matter. Assume it exists.

Anyone in control of such hypothetical systems can act upon the surveillance information to manipulate a target (not only observe it). This could be done in several ways: LLM bots encouraging you to volunteer information, gaslighting, etc.

The load-bearing component of such surveillance systems is _not_ these actors (LLMs, bots, etc). It's _the need for surveillance_.

What encourages a society to produce surveillance in the first place? Catching bad guys, protecting people, etc. I'm not saying that I agree with it; it's just the way it works.

Anyone doing shady things is a reason for surveillance to exist. Lying is one of those things; making LLM bots is one of those things. Therefore, to target the load-bearing aspect of surveillance, I need to walk in a straight line (I won't deploy LLM bots, create alt accounts, etc). There should be no reason to surveil me, unless whoever is in control is some kind of dictator or has developed some kind of fixation on me (then it's their problem, not mine).

I can do simple things, like watching videos I don't particularly like, or posting nonsense creative stories on a blog; simple things designed to hide nothing (they're playful, no intent). Why would someone care about what I post on a blog that no one visits? Why would someone care about the videos I watch? If someone starts to act on those things, it is because I'm being surveilled. They're honeypots for surveillance; there's nothing behind them.

With those, I can more easily see whoever is trying to act upon my public information by marking it. They will believe they're targeting my worldviews or preferences or personality, but they're actually "marked with high-visibility paint". In fact, I leave explicit notes about it, like "do not interact with this stuff". Only automated surveillance systems (unable to reason) or fanatic stalkers (unable to reason) would ignore those clear warnings.

This strategy is, like I mentioned, mostly based on honesty. It targets the load-bearing aspect of surveillance (the need for it) by making it useless and unnecessary (why are you surveilling me? I can see how you are acting on it).

It's not about making honest people targets, it's about making surveillance useless.

replies(1): >>44496779 #
50. __MatrixMan__ ◴[] No.44496779{11}[source]
I suppose we are. I generally assume that someone, somewhere, has something to hide: something that benefits me if they're allowed to keep it hidden. History is full of these characters; they keep the establishment in check and prevent it from treating the rest of us too badly.

If the powers that be could know with certainty that all of us planned to behave and would never move against them (or could neutralize those who had been honest about their intent to do so), then I think things would be much worse for the common folk than they are now. It's hard not to see your strategy as a path to that outcome.

replies(1): >>44496973 #
51. alganet ◴[] No.44496973{12}[source]
> keep the establishment in check

The ultimate subversion of the establishment is raw honesty. Honesty produces an environment that disables unjust distribution of power.

> never move against them

That's inaction, not honesty.

> It's hard not to see your strategy as a path to that outcome.

That's ok, my strategy does not require you to understand it. I don't need to create informative material or convince people.

replies(1): >>44497275 #
52. __MatrixMan__ ◴[] No.44497275{13}[source]
> Honesty produces an environment that disables unjust distribution of power

How does this work?

replies(1): >>44501063 #
53. vntok ◴[] No.44497769{8}[source]
That is really not how the insurance seller's business model works.

Think about it this way: in a given year, they collect a "Sales" amount of money from their pool of customers. For the insurer to make a profit, the amount reimbursed to legitimate claims simply has to be less than Sales - Expenses that year, which basically translates to having Z customer claims in any given year, where Z << NbOfCustomers.

So it's a bit like a Ponzi scheme, whereby you can benefit as a customer if you pay in year 1 and get a claim paid during year 1 or 2, for example, and the insurer can benefit too if many customers pay "a year in advance" (money that can be invested) before having their claims fall in years 2 or 3 (or never).
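
In toy numbers (all invented), the pool-level condition above looks like this:

  # Insurer profit condition: claim payouts < Sales - Expenses.
  n_customers = 10_000
  premium = 50.00
  expenses = 100_000.00  # payroll, overhead
  claim_payout = 400.00
  n_claims = 900         # Z << n_customers

  profit = n_customers * premium - expenses - n_claims * claim_payout
  print(f"profit = {profit:,.2f}")  # 40,000.00: positive only while Z stays small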

replies(1): >>44499428 #
54. lotsofpulp ◴[] No.44499428{9}[source]
The customers can earn investment returns just like the insurance seller, so you have to subtract foregone returns from the insurance buyer's benefit, and it ends up canceling out.

>For the insurer to make a profit, the amount reimbursed to legit claims simply has to be less than Sales-Expenses that year, which basically translates to having Z customers claims on any given year where Z << NbOfCustomers.

That inequality does not “basically translate”. Insurance sellers have to exist for multiple years, not just 1 year.

If every single year "customer claims" are less than the net benefit of customers, which is what I think you wrote although it is hard to interpret, then your "net benefit of customers" includes a non-cash component (such as feeling secure).

There is never a free lunch, and the insurance business is not at all like a Ponzi scheme (that's the whole point of actuaries performing calculations: to ensure sustainability without an ever-growing income stream).

55. alganet ◴[] No.44501063{14}[source]
In theory, I shouldn't need to explain this.