139 points stubish | 13 comments
jackvalentine No.44439355
Australians are broadly supportive of these kinds of actions - there is a view that foreign internet behemoths have failed to moderate for themselves and will therefore have moderation imposed on them, however imperfect.

Can’t say I blame them.

replies(3): >>44439415 >>44439676 >>44439817
AnthonyMouse No.44439817
> there is a view that foreign internet behemoths have failed to moderate for themselves and will therefore have moderation imposed on them, however imperfect.

This view is manufactured. The premise is that better moderation is available and that, despite this, literally no one is choosing to do it. The fact is that moderation is hard, and in particular that excluding everything actually bad without also incurring a catastrophically high false positive rate is infeasible.
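
For a concrete sense of that trade-off, here's a worked example with assumed numbers (the rates are hypothetical, chosen only for illustration):

    # Base-rate arithmetic with assumed, illustrative numbers.
    posts = 1_000_000
    bad_rate = 0.001   # assume 0.1% of posts are actually bad
    recall = 0.99      # assume the filter catches 99% of bad posts
    fpr = 0.01         # assume it wrongly flags 1% of innocent posts

    bad = posts * bad_rate             # 1,000 bad posts
    caught = bad * recall              # 990 caught
    false_flags = (posts - bad) * fpr  # 9,990 innocent posts flagged

    share_bad = caught / (caught + false_flags)
    print(f"{share_bad:.0%} of flagged posts are actually bad")  # ~9%

Under these assumptions roughly nine in ten flagged posts are innocent, even with a filter far more accurate than anything deployed at scale, and tightening it to catch the last 1% of bad posts makes the false positive problem worse, not better.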

But the primary victims of the false positives and the people who want the bad stuff fully censored aren't the same people, and the second group likes to pretend there is a magic solution that doesn't throw the first group under the bus, precisely so that they can throw the first group under the bus.

replies(5): >>44439891 >>44439944 >>44440013 >>44440547 >>44441786
1. Nursie No.44440547
> The fact is that moderation is hard

Moderation is hard when you prioritise growth and ad revenue over moderation, certainly.

We know a good solution: throw a lot of manpower at it. That may not be feasible for the giant platforms...

Oh no.

replies(2): >>44440705 >>44442149
2. AnthonyMouse No.44440705
This is the weirdest theory. The premise is that you admit the huge corporations with billions of dollars don't have the resources to pay moderators to contend with the professional-grade malicious content produced by profitable criminal syndicates, but some tiny forum is supposed to be able to get it perfect so they don't go to jail?
replies(2): >>44440792 >>44440867
3. Nursie No.44440792
> The premise is that you admit the huge corporations with billions of dollars don't have the resources to pay moderators

My contention is more that they don't have the will, because it would impact profits, and that if they did implement effective moderation at scale it might hurt their bottom line so badly they'd be unable to keep operating.

Further, that I would not lament such a passing.

I’m not saying tiny forums are some sort of panacea, merely that huge operations should not be able to get away with (for example) blatant fraudulent advertising on their platforms, on the basis that “we can’t possibly look at all of it”.

Find a way, or stop operating that service.

replies(1): >>44441248
4. fc417fc802 No.44440867
> but some tiny forum is supposed to be able to get it perfect so they don't go to jail?

Typically you would exempt smaller services from such legislation. That's the route Texas took with HB 20.

replies(1): >>44441201
5. AnthonyMouse No.44441201{3}
So the companies that exceed the threshold couldn't operate there (e.g. PornHub has ceased operating in Texas) but then everyone just uses the smaller ones. Wouldn't it be simpler and less confusing to ban companies over a certain size unconditionally?
replies(1): >>44446176
6. AnthonyMouse No.44441248{3}
> My contention is more that they don't have the will, because it would impact profits, and that if they did implement effective moderation at scale it might hurt their bottom line so badly they'd be unable to keep operating.

Is the theory supposed to be that the moderation would cost them users, or that the cost of paying for the moderation would cut too much into their profits?

Because the first one doesn't make a lot of sense: the perpetrators of these crimes are a trivial minority of the user base who inherently cost more in trouble than they're worth in revenue.

And the problem with the second one is that the cost of doing it properly would not only cut into the bottom line but put them deep into the red on a permanent basis, and then it's not so much a matter of unwillingness as of inability.

> I’m not saying tiny forums are some sort of panacea, merely that huge operations should not be able to get away with (for example) blatant fraudulent advertising on their platforms, on the basis that “we can’t possibly look at all of it”.

Should the small forums be able to get away with it though? Because they're the ones even more likely to be operating with a third-party ad network they neither have visibility into nor the leverage to influence.

> Further, that I would not lament such a passing.

If Facebook was vaporized and replaced with some kind of large non-profit or decentralized system or just a less invasive corporation, would I cheer? Probably.

But if every social network was eliminated and replaced with nothing... not so much.

replies(2): >>44441403 >>44441554
7. Nursie No.44441403{4}
> the cost of paying for the moderation would cut too much into their profits?

This one. Not just the cost of taking on staff: effective moderation would also cut into their bottom line by ending their ability to take money from bad-faith operators.

> And the problem with the second one is that the cost of doing it properly would not only cut into the bottom line but put them deep into the red on a permanent basis, and then it's not so much a matter of unwillingness as of inability.

Inability to do something properly and make a commercial success of it is a 'you' problem.

Take Meta and their ads - they've built a system in which it's possible to register, upload ads and show them to users more or less instantly, with more or less zero human oversight. There are various filters to try to catch stuff, but they're imperfect, so they serve fraudulent ads to their users all the time - fake celebrity endorsements, various things that fall foul of advertising standards. Some are just outright scams. (Local family store you never heard of is closing down! So sad! Buy our dropshipped crap from AliExpress at 8x the price!)

To properly, fully fix this they would need to verify advertisers and review ads before they go live. That would slow down delivery, require a moderate-sized army of reviewers, and lose them the revenue from the scammers. So many disincentives. So they say "This is impossible", but what they mean is "It is impossible to comply with the law and continue to rake in the huge profits we're used to". They may even mean "It is impossible to comply with the law and continue to run Facebook".
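
As a sketch of the structural difference being described (a hypothetical illustration in Python; none of these names or checks are Meta's actual system):

    # Hypothetical sketch: publish-then-filter vs. review-then-publish.
    # All names and checks are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Ad:
        advertiser: str
        text: str

    def automated_filter_flags(ad):  # stand-in for an imperfect classifier
        return "fake endorsement" in ad.text.lower()

    def advertiser_verified(name):   # stand-in for an identity check
        return name in {"verified-merchant"}

    def human_review_passes(ad):     # stand-in for a human reviewer
        return "closing down" not in ad.text.lower()

    def publish_then_filter(ad):
        # Current model: the ad is live immediately and earning revenue;
        # imperfect filters can only take it down after users have seen it.
        return "taken down after exposure" if automated_filter_flags(ad) else "live"

    def review_then_publish(ad):
        # Proposed model: nothing goes live until identity and content
        # checks pass - slower, costlier, and it forgoes the scam revenue.
        if not advertiser_verified(ad.advertiser):
            return "rejected: unverified advertiser"
        if not human_review_passes(ad):
            return "rejected: failed review"
        return "live"

    scam = Ad("pop-up-shop", "Local store closing down! Fake endorsement inside!")
    print(publish_then_filter(scam))  # reaches users before takedown
    print(review_then_publish(scam))  # never shown

In the second pipeline every rejection is either a cost (reviewers) or forgone revenue (scammers), which is the set of disincentives described above.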

OK, that's a classic 'you' problem. (Or it should be.) It's not really any different from "My chemical plant can't afford to continue to operate unless I'm allowed to dump toxic byproducts in the river". OK, you can't afford to operate, and if you keep doing it anyway, we're going to sanction you. So ... bye then?

> Should the small forums be able to get away with it though?

This is not really part of my argument. I don't think they should, no. But again - if they can't control what's being delivered through their site and there's evidence it contravenes the law, that's a 'them' problem, and they should stop using those third-party networks until the networks can show they comply properly.

> if every social network was eliminated and replaced with nothing... not so much.

Maybe it's time to find a new funding model. It's bad enough having a funding model based on advertising. It's worse having one based on throwing ad messages at people cheaply and quickly without even checking they meet basic legal standards. But here we are.

I realise this whole thing is a bit off-topic, as the discussion is about age verification and content moderation and I've strayed heavily into ad models...

8. pferde No.44441554{4}
Smaller forums are more likely to handle moderation effectively and in a timely manner. I frequent a few such forums, and have seen consistently good moderation for many years.
9. account42 No.44442149
Throwing a lot of manpower at moderation only gets you lots of little emperors who try to enforce their own views on others.
replies(1): >>44451609
10. fc417fc802 No.44446176{4}
That's hardly a good-faith interpretation of the goals behind the Texas law. Also, HB 20 was about social media deplatforming, not identification.

Notice that the goalposts shifted subtly from moderation of disallowed content to distribution of age-restricted content. The latter isn't amenable to size-based criteria, for obvious reasons.

Note that I don't think the various ID laws are good ideas. I don't even think they're remotely capable of accomplishing their stated goals. Whereas I do expect that it's possible to moderate a given platform decently well if the operator is made to care.

replies(1): >>44449187
11. AnthonyMouse No.44449187{5}
> That's hardly a good-faith interpretation of the goals behind the Texas law.

It's plausible that it wasn't what some of the supporters intended, but that was the result, and the result wasn't entirely unpredictable. And it plausibly is what some of the supporters intended. When PornHub decided to leave Texas, do you expect they counted it as a cost or had a celebration?

> Notice that the goalposts shifted subtly from moderation of disallowed content to distribution of age-restricted content. The latter isn't amenable to size-based criteria, for obvious reasons.

Would the former be any different? Sites over the threshold are forced to do heavy-handed moderation, putting them at a significant competitive disadvantage relative to sites below the threshold, so the equilibrium shifts toward a larger number of services that each fit under it. That doesn't even necessarily compromise the network effect if they're federated services: the network's size is then the set of all users of the protocol, even if no single operator exceeds the threshold.

> Note that I don't think the various ID laws are good ideas. I don't even think they're remotely capable of accomplishing their stated goals. Whereas I do expect that it's possible to moderate a given platform decently well if the operator is made to care.

I'm still not clear on how they're supposed to do that.

The general shape of the problem looks like this:

If you leave them to their own devices, they have the incentive to spend a balanced amount of resources against the problem, because they don't actually want those users but it requires an insurmountable level of resources to fully shake them loose without severely impacting innocent people. So they make some efforts but those efforts aren't fully effective, and then critics point to the failures as if the trade-off doesn't exist.

If you require them to fully stamp out the problem by law, they have to use the draconian methods that severely impact innocent people, because the only remaining alternative is to go out of business. So they do the first one, which is bad.

replies(1): >>44450065
12. fc417fc802 No.44450065{6}
I was referring to the intent behind HB 20, which has a size exemption and, AFAIK, hasn't driven anyone out of the market.

The ID law, sure: I doubt its proponents care which alternative comes to pass (ID checks or market exit), since I expect they're opposed to the service to begin with. But that law has no size carveout, I didn't use it as an example, and I don't think it's a good law. So we're likely in agreement regarding it.

> Would the former be any different?

I expect so, yes. You've constructed a dichotomy where heavy-handed moderation and failure to moderate effectively are the only possible outcomes. That seems like ideologically motivated helplessness to me.

I'm also not entirely clear what we're talking about anymore. The proposed law has to do with ID checks, the sentiment expressed was "if you don't moderate for yourselves the government will impose on you", and somehow we've arrived at you confidently claiming that decent moderation is unattainable. Yet you haven't specified either the price range or the criteria being adhered to.

The point you raise about federated networks is an interesting one; however, it remains to be seen whether such networks exhibit the same dynamics that centralized ones do. In the absence of a profit-driven incentive for an engagement-farming algorithm, we don't yet know whether the same social ills will be present.

13. Nursie No.44451609
What if the view is "advertising should be within the law of the country it's being displayed in and not contain fake celebrity endorsements or blatant lies"?

Because AFAICT some of the big platforms are failing at this, before we even get into content moderation.

> Throwing a lot of manpower at moderation only gets you lots of little emperors who try to enforce their own views on others.

Do you consider dang a 'little emperor'? If anything, HN seems proof that communities can thrive with moderation.