Can’t say I blame them.
This view is manufactured. The premise is that better moderation is available and that, despite this, literally no one is choosing to do it. The reality is that moderation is hard; in particular, excluding everything actually bad without a catastrophically high false positive rate is infeasible.
But the people who are the primary victims of the false positives and the people who want the bad stuff fully censored aren't all the same people. And the second group likes to pretend there's a magic solution that doesn't throw the first group under the bus, so that they can throw the first group under the bus.
Moderation is hard when you prioritise growth and ad revenue over moderation, certainly.
We know a good solution - throw a lot of manpower at it. That may not be feasible for the giant platforms...
Oh no.
Typically you would exempt smaller services from such legislation. That's the route Texas took with HB 20.
Notice that the goalposts shifted subtly from moderation of disallowed content to distribution of age-restricted content. The latter isn't amenable to size-based criteria, for obvious reasons.
Note that I don't think the various ID laws are good ideas. I don't even think they're remotely capable of accomplishing their stated goals. Whereas I do expect that it's possible to moderate a given platform decently well if the operator is made to care.
It's plausible that it wasn't what some of the supporters intended, but that was the result, and the result wasn't entirely unpredictable. And it plausibly is what some of the supporters intended. When PornHub decided to leave Texas, do you expect they counted it as a cost or had a celebration?
> Notice that the goalposts shifted subtly from moderation of disallowed content to distribution of age-restricted content. The latter isn't amenable to size-based criteria, for obvious reasons.
Would the former be any different? Sites over the threshold are forced to do heavy-handed moderation, putting them at a significant competitive disadvantage relative to sites below the threshold, so the equilibrium shifts toward a larger number of services that each fit below it. That doesn't even necessarily compromise the network effect: if they're federated services, the network size is the set of all users of the protocol, even if no individual operator exceeds the threshold.
> Note that I don't think the various ID laws are good ideas. I don't even think they're remotely capable of accomplishing their stated goals. Whereas I do expect that it's possible to moderate a given platform decently well if the operator is made to care.
I'm still not clear on how they're supposed to do that.
The general shape of the problem looks like this:
If you leave them to their own devices, they have an incentive to spend a balanced amount of resources on the problem: they don't actually want those users, but it requires an insurmountable level of resources to fully shake them loose without severely impacting innocent people. So they make some efforts, those efforts aren't fully effective, and then critics point to the failures as if the trade-off doesn't exist.
If you require them by law to fully stamp out the problem, they have to use the draconian methods that severely impact innocent people, because the only remaining alternative is to go out of business. So they choose the former, which is bad.
The ID law, sure, I doubt the proponents of it care which alternative comes to pass (ID checks or market exit) since I expect they're opposed to the service to begin with. But that law has no size carveout, I didn't use it as an example, and I don't think it's a good law. So we're likely in agreement regarding it.
> Would the former be any different?
I expect so, yes. You've constructed a dichotomy in which heavy-handed moderation and failure to moderate effectively are the only possible outcomes. That seems like ideologically motivated helplessness to me.
I'm also not entirely clear what we're talking about anymore. The proposed law has to do with ID checks, the sentiment expressed was "if you don't moderate for yourselves, the government will impose on you", and somehow we've arrived at you confidently claiming that decent moderation is unattainable. Yet you've specified neither the price range nor the criteria being adhered to.
The point you raise about federated networks is an interesting one, but it remains to be seen whether such networks exhibit the same dynamics that centralized ones do. In the absence of a profit-driven incentive for an algorithm that farms engagement, we don't yet know whether the same social ills will be present.