torginus No.45011561
I genuinely do not understand how the idea of building a total surveillance police state, where all speech is monitored, can even be seriously considered by an allegedly pro-democracy, pro-human-rights government, much less make it into law.

Also:

Step 1: Build mass surveillance to prevent the 'bad guys' from coming into political power (it's ok, we're the good guys).

Step 2: Your political opponents capitalize on your genuinely horrific overreach and legitimize themselves in the eyes of the public as fighting against tyranny (unfortunately for you, they do have a point). They promise to dismantle the system if they come to power.

Step 3: They get elected.

Step 4: They don't dismantle the system; now the people you planned to use it against are using it against you.

Sounds brilliant, let's do this.

IanCal No.45012976
I'm not a fan of the OSA, but proponents of it will *keep winning* if you *keep misrepresenting it*.

You can, and should, argue about the effects, but here is the core of the OSA and how it can be sold, at several levels of detail:

One, most detailed.

Sites that provide user-to-user services have some level of duty of care to their users, like physical sites and events.

They should do risk assessments to see if their users are at risk of getting harmed, like physical sites and events.

They should implement mitigations based on those risk assessments. Not to completely remove all possibility of harm, but to lower it.

For example, sites where kids can talk to each other in private chats should have ways for kids to report adults, and moderators to review those reports. Sites where you can share pictures should check for people sharing child porn (if you give a userbase a way to share encrypted images with each other anonymously, you're going to get child porn on there). Sites aimed at adults with public conversations, like some hobby site with no history of issues and someone checking for spam etc., don't need to do much.

You should re-check things once a year.

That's the selling point - and as much as we can argue about second-order effects (like having a list of IDs tied to what you've watched, overhead, etc.), those statements don't, on the face of it, seem objectionable.

Two, shorter.

Sites should be responsible about what they do, just like shops and other spaces, with risk assessments and more focus when kids are involved.

Three, shortest.

Facebook should make sure people aren't grooming your kids.

Now, the problem with talking about "a total surveillance police state, where all speech is monitored" is: where does that fit into the explanations above? How do you explain that even to me, a highly technical, terminally online nerd who has read at least a decent chunk of the actual OFCOM guidelines?

ang_cire No.45015325
> They should do risk assessments to see if their users are at risk of getting harmed, like physical sites and events.

The problem is when one group wants to impose their definition of harm on everyone else, saying that everyone else shouldn't be allowed to be 'harmed' even if they don't consider it as such. In the UK this is not unique to the OSA discussion (see the UK's anti-trans turn), but it is very relevant.

IanCal No.45016644
This is a very valid point and, importantly, one of the more detailed issues.

So it's a good one to start with when arguing against the OSA - you say 'harm', but what does that mean? What must sites assume it could mean? And what are examples of things that would at least risk getting a site caught out?