139 points | stubish | 2 comments
jackvalentine No.44439355
Australians are broadly supportive of these kinds of actions; there is a view that foreign internet behemoths have failed to moderate for themselves and will therefore have moderation imposed on them, however imperfect.

Can’t say I blame them.

AnthonyMouse No.44439817
> there is a view that foreign internet behemoths have failed to moderate for themselves and will therefore have moderation imposed on them however imperfect.

This view is manufactured. The premise is that better moderation is available and, despite that, literally no one is choosing to do it. The fact is that moderation is hard, and in particular, excluding all actually bad things without a catastrophically high false-positive rate is infeasible.

But the primary victims of the false positives and the people who want the bad stuff fully censored aren't the same people, and the second group likes to pretend that a magic solution exists that doesn't throw the first group under the bus, so that they can throw the first group under the bus.

Nursie No.44440547
> The fact is that moderation is hard

Moderation is hard when you prioritise growth and ad revenue over moderation, certainly.

We know a good solution: throw a lot of manpower at it. That may not be feasible for the giant platforms...

Oh no.

AnthonyMouse No.44440705
This is the weirdest theory. The premise is that you admit the huge corporations with billions of dollars don't have the resources to pay moderators to contend with professional-grade malicious content from profitable criminal syndicates, but some tiny forum is supposed to be able to get it perfect so they don't go to jail?
fc417fc802 No.44440867
> but some tiny forum is supposed to be able to get it perfect so they don't go to jail?

Typically you would exempt smaller services from such legislation. That's the route Texas took with HB 20.

AnthonyMouse No.44441201
So the companies that exceed the threshold couldn't operate there (e.g. PornHub has ceased operating in Texas) but then everyone just uses the smaller ones. Wouldn't it be simpler and less confusing to ban companies over a certain size unconditionally?
fc417fc802 No.44446176
That's hardly a good faith interpretation of the goals behind the Texas law. Also, HB 20 was about social media deplatforming, not identification.

Notice that the goalposts shifted subtly from moderation of disallowed content to distribution of age-restricted content. The latter isn't amenable to size-based criteria, for obvious reasons.

Note that I don't think the various ID laws are good ideas. I don't even think they're remotely capable of accomplishing their stated goals. Whereas I do expect that it's possible to moderate a given platform decently well if the operator is made to care.

AnthonyMouse No.44449187
> That's hardly a good faith interpretation of the goals behind the Texas law.

It's plausible that it wasn't what some of the supporters intended, but it was the result, and the result wasn't entirely unpredictable. And it plausibly is what some of the supporters intended. When PornHub decided to leave Texas, do you expect they counted it as a cost or held a celebration?

> Notice that the goalposts shifted subtly from moderation of disallowed content to distribution of age restricted content. The latter isn't amenable to size based criteria for obvious reasons.

Would the former be any different? Sites over the threshold are forced to do heavy-handed moderation, putting them at a significant competitive disadvantage relative to sites below the threshold, so the equilibrium shifts toward a larger number of services that each fit below it. That doesn't even necessarily compromise the network effect: with federated services, the network size is the set of all users of the protocol, even if no individual operator exceeds the threshold.

> Note that I don't think the various ID laws are good ideas. I don't even think they're remotely capable of accomplishing their stated goals. Whereas I do expect that it's possible to moderate a given platform decently well if the operator is made to care.

I'm still not clear on how they're supposed to do that.

The general shape of the problem looks like this:

If you leave them to their own devices, they have an incentive to spend a balanced amount of resources on the problem: they don't actually want those users, but it requires an insurmountable level of resources to fully shake them loose without severely impacting innocent people. So they make some efforts, those efforts aren't fully effective, and then critics point to the failures as if the trade-off didn't exist.

If you instead require them by law to fully stamp out the problem, they have to use draconian methods that severely impact innocent people, because the only remaining alternative is to go out of business. So they choose the former, which is bad.

fc417fc802 No.44450065
The intent I was referring to was the intent behind HB 20, which has a size exemption and, as far as I know, has not driven anyone out of the market.

The ID law, sure: I doubt its proponents care which alternative comes to pass (ID checks or market exit), since I expect they're opposed to the service to begin with. But that law has no size carveout, I didn't use it as an example, and I don't think it's a good law. So we're likely in agreement regarding it.

> Would the former be any different?

I expect so, yes. You've constructed a dichotomy in which heavy-handed moderation and failure to moderate effectively are the only possible outcomes. That seems like ideologically motivated helplessness to me.

I'm also not entirely clear what we're talking about anymore. The proposed law has to do with ID checks; the sentiment expressed was "if you don't moderate for yourselves, the government will impose moderation on you"; and somehow we've arrived at you confidently claiming that decent moderation is unattainable. Yet you haven't specified the price range or the criteria being adhered to.

The point you raise about federated networks is an interesting one; however, it remains to be seen whether such networks exhibit the same dynamics that centralized ones do. In the absence of a profit-driven incentive for an engagement-farming algorithm, we don't yet know whether the same social ills will be present.