
568 points by layer8 | 3 comments
ineedasername ◴[] No.45768131[source]
I’m continually astounded that so many people, faced with a societal problem, reflexively turn to “Hmmm, perhaps if we monitored and read and listened to every single thing that every person does, all of the time…”

As though it would 1) be a practical possibility and 2) be effective.

Compounding the issue is that the more technology can solve #1, the more these people fixate on it as the solution, without regard to the lack of #2.

I wish there were a way, once and for all, to prevent this ridiculous idea from taking hold over and over again. If I could get a hold of such people when these ideas were in their infancy… perhaps I should monitor everything everyone does and watch for people considering the same as a solution to their problem… ah well, no, still don’t see how that follows logically as a reasonable solution.

replies(17): >>45768311 #>>45768758 #>>45768812 #>>45768845 #>>45768873 #>>45769030 #>>45769192 #>>45769801 #>>45769868 #>>45769961 #>>45770005 #>>45770264 #>>45770801 #>>45770827 #>>45771089 #>>45772424 #>>45776034 #
usernomdeguerre ◴[] No.45768311[source]
The issue is that there is a place where this model ~is working. It's in China and Russia. The GFW, its Russian equivalent, and the national security laws binding all of their tech companies and public discussion do exactly these things in a way that has allowed their leadership to go unchallenged for decades now.

The rest of the world isn't stupid or silly for suggesting these policies. They're following a model that has proven effective at producing the outcomes they are looking for.

We do ourselves a disservice by acting like there is some inherent flaw in it.

replies(18): >>45768496 #>>45768599 #>>45768644 #>>45769338 #>>45769392 #>>45769722 #>>45770019 #>>45770099 #>>45770285 #>>45770405 #>>45770530 #>>45770788 #>>45771104 #>>45771169 #>>45771319 #>>45771623 #>>45771694 #>>45774191 #
1. npteljes ◴[] No.45771319[source]
EDIT: On re-reading I understood this better, and it turns out I actually agree rather than disagree.

I agree: it's a great, proven tool for doing away with political enemies and for enforcing the law selectively, whatever the motivation.

I just don't understand what you mean by

>We do ourselves a disservice by acting like there is some inherent flaw in it.

We (as in, "the people") don't do ourselves any disservice by opposing such an effort, precisely because we are also looking at what goes on in Russia and China, to name a few. Authoritarian regimes do "work", but we generally don't want that kind of "working" over here in Europe, for example.

replies(1): >>45772085 #
2. Jordan-117 ◴[] No.45772085[source]
I think they meant it's a disservice to act like these panopticons are inefficient/ineffective and thus not a real threat. Even current-gen AI plus mass surveillance would make it trivially easy to build dossiers and trawl communications for specific ideas.
replies(1): >>45772316 #
3. npteljes ◴[] No.45772316[source]
Thanks for the clarification; it went over my head. After re-reading the comment chain a few times, it's now clear that OP was alluding to the ulterior motive, and to that motive being effectively served, which I agree with. Again, thanks for taking the time to clarify.