Most active commenters
  • AnthonyMouse(11)
  • fc417fc802(5)
  • bigfatkitten(3)

139 points by stubish | 17 comments
jackvalentine ◴[] No.44439355[source]
Australians are broadly supportive of these kinds of actions: there is a view that foreign internet behemoths have failed to moderate for themselves and will therefore have moderation imposed on them, however imperfect.

Can’t say I blame them.

replies(3): >>44439415 #>>44439676 #>>44439817 #
AnthonyMouse ◴[] No.44439817[source]
> there is a view that foreign internet behemoths have failed to moderate for themselves and will therefore have moderation imposed on them, however imperfect.

This view is manufactured. The premise is that better moderation is available and that, despite this, literally no one is choosing to do it. The fact is that moderation is hard, and in particular that excluding all actually bad things without also having a catastrophically high false positive rate is infeasible.

But the people who are the primary victims of the false positives and the people who want the bad stuff fully censored aren't all the same people, and the second group likes to pretend that there is a magic solution that doesn't throw the first group under the bus, precisely so that they can throw the first group under the bus.
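
To put rough numbers on that, here is a back-of-the-envelope sketch in Python; every figure in it (volume, prevalence, error rates) is an assumption picked for illustration, not a real platform statistic. Even a classifier that is right 99% of the time in both directions ends up flagging mostly innocent content when the genuinely bad content is rare:

    # Base-rate sketch. All numbers are illustrative assumptions,
    # not real platform statistics.
    posts_per_day = 100_000_000   # assumed daily volume for a large platform
    bad_rate = 0.001              # assume 0.1% of posts are actually bad
    sensitivity = 0.99            # assumed: 99% of bad posts get flagged
    false_positive_rate = 0.01    # assumed: 1% of innocent posts get flagged

    bad = posts_per_day * bad_rate
    innocent = posts_per_day - bad
    flagged_bad = bad * sensitivity
    flagged_innocent = innocent * false_positive_rate
    precision = flagged_bad / (flagged_bad + flagged_innocent)

    print(f"flagged bad posts:      {flagged_bad:,.0f}")      # ~99,000
    print(f"flagged innocent posts: {flagged_innocent:,.0f}")  # ~999,000
    print(f"precision of the flag:  {precision:.1%}")          # ~9.0%

Under those assumed numbers, roughly nine out of ten flagged posts are false positives, which is the trade-off the "just moderate better" framing glosses over.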

replies(5): >>44439891 #>>44439944 #>>44440013 #>>44440547 #>>44441786 #
bigfatkitten ◴[] No.44439891[source]
> The premise is that better moderation is available and despite that, literally no one is choosing to do it.

It’s worse than that. Companies actively refuse to do anything about content that is reported to them directly, at least until the media kicks up a stink.

Nobody disputes that reliably detecting bad content is hard, but doing nothing about bad content you know about is inexcusable.

https://archive.is/8dq8q

replies(1): >>44439988 #
AnthonyMouse ◴[] No.44439988[source]
Your link says the opposite of what you claim:

> Meta said it has in the past two years taken down 27 pedophile networks and is planning more removals.

Moreover, the rest of the article describes the difficulty of doing moderation. If you make a general-purpose algorithm that links up people with similar interests, and there is a group of people with an interest in child abuse, the algorithm doesn't inherently know that; and if you push on it to make it behave differently in that case than it does in the general case, the people you're trying to thwart will actively take countermeasures, like using different keywords or coded language.

Meanwhile user reporting features are also full of false positives or corporate and political operatives trying to have legitimate content removed, so expecting them to both immediately and perfectly respond to every report is unreasonable.

Pretending that this is easy to solve is the thing authoritarians do to justify steamrolling innocent people over a problem nobody has any good way to fully eliminate.

replies(1): >>44440049 #
bigfatkitten ◴[] No.44440049[source]
> Your link says the opposite of what you claim

I don’t know where you got that from. Meta’s self-congratulatory takedown of “27 pedophile networks” is a drop in the ocean.

Here’s a fairly typical example of them actively deciding to do nothing in response to a report. This mirrors my own experience.

> Like other platforms, Instagram says it enlists its users to help detect accounts that are breaking rules. But those efforts haven’t always been effective.

> Sometimes user reports of nudity involving a child went unanswered for months, according to a review of scores of reports filed over the last year by numerous child-safety advocates.

> Earlier this year, an anti-pedophile activist discovered an Instagram account claiming to belong to a girl selling underage-sex content, including a post declaring, “This teen is ready for you pervs.” When the activist reported the account, Instagram responded with an automated message saying: “Because of the high volume of reports we receive, our team hasn’t been able to review this post.”

> After the same activist reported another post, this one of a scantily clad young girl with a graphically sexual caption, Instagram responded, “Our review team has found that [the account’s] post does not go against our Community Guidelines.” The response suggested that the user hide the account to avoid seeing its content.

replies(1): >>44440106 #
1. AnthonyMouse ◴[] No.44440106[source]
Your claim was that they "actively refuse" to do anything about it, but they clearly do actually take measures.

As mentioned, the issue is that they get zillions of reports and vast numbers of them are organized scammers trying to get them to take down legitimate content. Then you report something real and it gets lost in a sea of fake reports.

What are they supposed to do about that? It takes far fewer resources to file a fake report than investigate one and nobody can drink the entire ocean.

replies(4): >>44440299 #>>44440514 #>>44440754 #>>44440858 #
2. fc417fc802 ◴[] No.44440299[source]
Active refusal can (and commonly does) take the form of intentionally being unable to respond, or merely putting on such an appearance. One of the curious things about Twitter pre-acquisition was that underage content somewhat frequently stayed up for months while discriminatory remarks were generally taken down rapidly. Post-acquisition, such content seemed to disappear approximately overnight.

If the system is pathologically unable to deal with false reports to the extent that moderation has effectively ground to a standstill perhaps the regulator ought to get involved at that point and force the company to either change its ways or go out of business trying?

replies(1): >>44440592 #
3. riffraff ◴[] No.44440514[source]
But this goes back to the original argument: maybe if you can't avoid causing harm then you shouldn't be allowed to operate?

E.g. if you produce eggs and you can't avoid salmonella at some point your operation should be shut down.

Facebook and its ilk have massive profits, they can afford more moderators.

replies(1): >>44440634 #
4. AnthonyMouse ◴[] No.44440592[source]
> One of the curious things about Twitter pre-acquisition was that underage content somewhat frequently stayed up for months while discriminatory remarks were generally taken down rapidly. Post-acquisition, such content seemed to disappear approximately overnight.

This isn't evidence that they have a system for taking down content without a huge number of false positives. It's evidence that the previous administrators of Twitter were willing to suffer a huge number of false positives around accusations of racism and the current administrators are willing to suffer them around accusations of underaged content.

replies(1): >>44440803 #
5. AnthonyMouse ◴[] No.44440634[source]
> But this goes back to the original argument: maybe if you can't avoid causing harm then you shouldn't be allowed to operate?

By this principle the government can't operate the criminal justice system anymore, because it has too many false positives and uncaptured negative externalities, and then you don't have anything left with which to tell Facebook to censor things.

> Facebook and its ilk have massive profits, they can afford more moderators.

They have large absolute profits because of the large number of users but the profit per user is in the neighborhood of $1/month. How much human moderation do you think you can get for that?
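
For a sense of scale, here is a minimal cost sketch; the wage and review time are assumptions chosen for illustration, and only the ~$1/month profit figure comes from the paragraph above:

    # Moderation-cost sketch. Wage and review time are assumptions,
    # not Meta figures; $1/month is the profit figure cited above.
    profit_per_user_month = 1.00    # USD
    moderator_cost_per_hour = 25.0  # assumed fully loaded cost, USD
    seconds_per_review = 120        # assumed time for one careful review

    cost_per_review = moderator_cost_per_hour * seconds_per_review / 3600
    reviews_per_user = profit_per_user_month / cost_per_review

    print(f"cost per careful review: ${cost_per_review:.2f}")              # ~$0.83
    print(f"reviews per user/month the profit buys: {reviews_per_user:.1f}")  # ~1.2

Spending the entire per-user profit on moderation would buy barely more than one careful two-minute human review per user per month, before any other cost of running the service.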

replies(1): >>44440849 #
6. coryrc ◴[] No.44440754[source]
> What are they supposed to do about that?

Do what banks do: Know Your Customer. If someone commits a crime using your assets, you are required to supply evidence to the police, and you then ban that person from using your assets. If someone files false reports, you ban them from making further reports.

Now your rate of false positives is low enough to handle.

replies(1): >>44440853 #
7. fc417fc802 ◴[] No.44440803{3}[source]
I agree that on its own it isn't evidence of the ability to respond without excessive false positives. But similarly, it isn't evidence of an inability to do so either.

In the context of Australia objecting to a lack of moderation I'm not sure it matters. It seems reasonable for a government to set minimum standards which companies that wish to operate within their territory must abide by. If, as you claim (and I doubt), the current way of doing things is uneconomical under those requirements, then perhaps it would be reasonable for those products to be excluded from the Australian market. Or perhaps they would instead choose to charge users for the service? Either outcome would make room for fairly priced local alternatives to gain traction.

This seems like a case of free trade enabling an inferior American product to be subsidized by the vendor thereby undercutting any potential for a local industry. The underlying issue feels roughly analogous to GDPR except that this time the legislation is terrible and will almost certainly make society worse off in various ways if it passes.

replies(1): >>44440902 #
8. fc417fc802 ◴[] No.44440849{3}[source]
> By this principle the government can't operate the criminal justice system

Obviously we make case by case decisions regarding such things. There are plenty of ways in which governments could act that populations in the west generally deem unacceptable. Private prisons in the US, for example, are quite controversial at present.

It's worth noting that if the regulator actually enforces requirements then they become merely a cost of doing business that all participants are subject to. Such a development in this case could well mean that all the large social platforms operating within the Australian market start charging users in that region on the order of $30 per year to maintain an account.

replies(1): >>44441061 #
9. AnthonyMouse ◴[] No.44440853[source]
This is the post people should point to when someone says "slippery slope is a fallacy" in order to prove them wrong, both for the age verification requirements and for making banks do KYC.

But also, your proposal would deter people from reporting crimes, because not only are they hesitant to give randos or mass-surveillance corporations their social security numbers, they may also fear retaliation from the criminals if it leaks.

And the same thing happens for people posting content -- identity verification is a deterrent to posting -- which is even worse than a false positive because it's invisible and you don't have the capacity to discover or address it.

10. bigfatkitten ◴[] No.44440858[source]
> but they clearly do actually take measures.

Sometimes, but clearly not often enough.

Does a refusal get more active than a message that says “Our review team has found that [the account’s] post does not go against our Community Guidelines”?

> Then you report something real and it gets lost in an sea of fake reports.

It didn’t get ‘lost’ — they (or their contract content moderators at Concentrix in the Philippines) sat on it, and then sent a message saying they had decided not to do anything about it.

> What are they supposed to do about that?

They’ve either looked at the content and decided to do nothing about it, or they’ve lied when they said that they had, and that it didn’t breach policy. Which do you suppose it was?

replies(1): >>44440954 #
11. AnthonyMouse ◴[] No.44440902{4}[source]
> I agree that on its own it isn't evidence of the ability to respond without excessive false positives. But similarly, it isn't evidence of an inability to do so either.

It is in combination with the high rate of false positives, unless you think the false positives were intentional.

> If as you claim (and I doubt) the current way of doing things is uneconomical under those requirements then perhaps it would be reasonable for those products to be excluded from the Australian market.

If they actually required both removal of all offending content and a low false positive rate (e.g. by allowing customers to sue them for damages for removals of lawful content) then the services would exit the market because nobody could do that.

What they'll typically do instead is accept the high false positive rate rather than leave the market, and then the service remains but becomes plagued by innocent users being victimized by capricious and overly aggressive moderation tactics. But local alternatives couldn't do any better under the same constraints, so you're still stuck with a trash fire.

12. AnthonyMouse ◴[] No.44440954[source]
> Does a refusal get more active than a message that says “Our review team has found that [the account’s] post does not go against our Community Guidelines”?

That's assuming their "review team" actually reviewed it before sending that message and purposely chose to allow it to stay up knowing that it was a false negative. But that seems pretty unlikely compared to the alternative, where the reviewers were overwhelmed and making determinations without doing a real review, or doing one so cursory that the error was effectively made blind.

> They’ve either looked at the content and decided to do nothing about it, or they’ve lied when they said that they had, and that it didn’t breach policy. Which do you suppose it was?

Almost certainly the second one. What would even be their motive to do the first one? Pedos are a blight that can't possibly be generating enough ad revenue through normal usage to make up for all the trouble they cause, even under the assumption that the company has no moral compass whatsoever.

13. AnthonyMouse ◴[] No.44441061{4}[source]
> Obviously we make case by case decisions regarding such things.

You can make case by case decisions regarding individual aspects of the system, but no modern criminal justice system exists that has never put an innocent person behind bars, much less on trial. Fiddling with the details can get you better or worse but it can't get you something that satisfies the principle that you can't operate if you can't operate without ever doing any harm to anyone. Which implies that principle is unreasonable and isn't of any use in other contexts either.

> It's worth noting that if the regulator actually enforces requirements then they become merely a cost of doing business that all participants are subject to. Such a development in this case could well mean that all the large social platforms operating within the Australian market start charging users in that region on the order of $30 per year to maintain an account.

The premise there is that you could solve the problem for $30 per person annually, i.e. $2.50/month. I'm left asking the question again, how much human moderation do you expect to get for that?

Meanwhile, that's $30 per service. That's going to increase the network effect of any existing service because each additional recurring fee or requirement to submit payment data is a deterrent to using another one. And maybe the required fee would be more than that. Are you sure you want to entrench the incumbents as a permanent oligarchy?

replies(1): >>44446478 #
14. fc417fc802 ◴[] No.44446478{5}[source]
That doesn't follow. A principle's absolute form being unattainable doesn't mean the principle isn't of use. As I stated, you make a case-by-case judgment when applying it. That you aren't satisfied by the imperfection doesn't imply a lack of usefulness.

I expect you can get quite a bit of moderation for that price. If a given user is exceeding that then they are likely so problematic that you will want to ban them anyway. Speaking from personal experience, the vast majority of users never act in a way that requires attention in the first place.

If the law discriminates on size you don't end up with (or at least don't exacerbate) the oligarchy scenario. In fact it acts to counter network effects by economically incentivizing use of the smaller services.

replies(1): >>44449422 #
15. AnthonyMouse ◴[] No.44449422{6}[source]
> That doesn't follow. The absolute of a principle being unobtainable doesn't mean it isn't of use. As I stated, you make a case by case judgment when applying it. That you aren't satisfied by the imperfection doesn't imply a lack of usefulness.

The principle was that if you can't operate without doing harm, you can't operate.

But then nobody can operate, including the government.

If you give up that absolutist principle and concede that there are trade-offs in everything, that's the status quo and there's nothing to fix. They already have the incentive to spend a reasonable amount of resources to remove those users, because they don't want them. The unfortunate reality is that spending a reasonable amount of resources doesn't fully get rid of them, and spending an unreasonable amount of resources (or making drastic trade-offs against false positives) is unreasonable.

> I expect you can get quite a bit of moderation for that price. If a given user is exceeding that then they are likely so problematic that you will want to ban them anyway. Speaking from personal experience, the vast majority of users never act in a way that requires attention in the first place.

It's not about whether some specific user exceeds the threshold. You have a reporting system, and some double-digit percentage of users will use it as an "I disagree with this poster's viewpoint" button. Competitors will use it to try to take down the competition's legitimate content. Criminal organizations will create fake accounts or use stolen credentials and use the reporting system to extort people into paying ransom: pay up, or the fake accounts mass-report the victim's account, and then if even a small percentage of the fake reports make it through the filter, the victim loses their account. Meanwhile there are legitimate reports in there as well.

You would then need enough human moderators to thoroughly investigate every one of those reports, taking into account context and possibly requiring familiarity with the specific account doing the posting to determine whether it was intended as satire or sarcasm. The accuracy has to be well in excess of 99% or you're screwed: even a 1% false positive rate means the extortion scheme is effective, because they file 1000 fake reports and the victim's account gets 10 strikes against it; and a 1% false negative rate means people make 1000 legitimate reports, you take down 990 of them, but each of the 10 you got wrong has a story written about it in the newspaper.
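
A worked version of that arithmetic, with the report volumes, error rates and strike threshold all assumed for illustration:

    # False-positive / false-negative arithmetic from the paragraph above.
    # Report counts, error rates and the strike threshold are assumptions.
    fake_reports = 1000      # mass reports filed against one victim
    fp_rate = 0.01           # 1% of fake reports wrongly upheld
    strike_limit = 3         # assumed strikes before an account is banned
    strikes = fake_reports * fp_rate
    print(f"bogus strikes against the victim: {strikes:.0f}")   # 10, well past 3

    real_reports = 1000      # genuine reports of bad content
    fn_rate = 0.01           # 1% of genuine reports wrongly dismissed
    missed = real_reports * fn_rate
    print(f"genuine reports wrongly dismissed: {missed:.0f}")   # 10

Ten bogus strikes is far past the assumed ban threshold, and each of the ten missed genuine reports is a potential newspaper story, which is why both error rates have to be driven implausibly low at the same time.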

Banning the accounts posting the actual illegal content is what they already do, but those people just make new accounts. Banning the accounts of honest people who get a lot of fake reports makes the problem worse, because it makes it easier to do the extortion scheme and then more criminals do it.

> If the law discriminates on size you don't end up with (or at least exacerbate) the oligarchy scenario. In fact it acts to counter network effects by economically incentivizing use of the smaller services.

But that was the original issue -- if you exempt smaller services then smaller services get a competitive advantage, and then you're back to the services people actually use not being required to do aggressive moderation. The only benefit then is that you got the services to become smaller, and if that's the goal then why not just do it directly and pass a law capping entity size?

replies(1): >>44450184 #
16. fc417fc802 ◴[] No.44450184{7}[source]
You're being dense. The original statement was:

> maybe if you can't avoid causing harm then you shouldn't be allowed to operate?

That isn't plausible to interpret as an absolute. The trade-off is implied; as far as I can tell there isn't any other reasonable interpretation. It follows that the contextual implication is that the status quo is one of excessive harm.

Clearly others don't agree with your view that a "reasonable amount of resources" is being spent on the problem at present.

As for your hand-wringing about abuse of reports, nearly all of the smaller platforms I have participated in have treated that as some form of bannable offense.

Responding to reports doesn't take anywhere near as much effort as you're making out. The situation with the large centralized networks is analogous to a company that keeps cutting its IT budget while management loudly complains that it's simply impossible to get reliable infrastructure in this day and age without spending an excessive amount.

> if that's the goal then why not just do it directly and pass a law capping entity size?

Because that's (quite obviously) not the goal. To date smaller venues have very good track records in my personal experience. The idea being floated was that the centralized services that actively manipulate the behavior of large portions of the population either improve theirs or be removed from the market.

replies(1): >>44453130 #
17. AnthonyMouse ◴[] No.44453130{8}[source]
> That isn't plausible to interpret as an absolute.

It's inaccurate rhetoric, is the point. You would have to say "maybe if you can't avoid causing excessive harm you shouldn't be allowed to operate" in order to have a reasonable statement, but then you would be inviting the valid criticism that "excessive harm" isn't what's currently happening. And dodging that criticism by eliding the qualifier is the thing I'm not inclined to let someone get away with.

> Clearly others don't agree with your view that a "reasonable amount of resources" is being spent on the problem at present.

But do they disagree based on some kind of logical reasoning or evidence, or because they have a general feeling of wanting to protect kids, which can't tell you whether any given proposal to do so will cost more than it's worth?

> To your hand wringing about abuse of reports, nearly all of the smaller platforms I have participated on have treated that as some form of bannable offense.

Which has two problems. First, if someone reports something which is on the line and you decide that it's not over the line even though it was close, you're going to ban the person who reported it? And second, the people submitting false reports as a business model don't care about getting banned because they'll just open more accounts, or they were using compromised accounts to begin with and then you're banning the accounts of innocent people who have had their machines infected with malware.

> Responding to reports doesn't take anywhere near as much effort as you're making out.

Responding to reports with high accuracy absolutely does require a large amount of resources. Consider that the most common system that actually tries to do that -- and even then still often gets it wrong -- is the court system. You can't even get within two orders of magnitude of that level of resources per report while still calling it a feasible amount of resources to foist onto a private party as an unfunded mandate.

> Because that's (quite obviously) not the goal.

If not propping up megacorps is a goal -- and it should be -- then encouraging smaller services is a rewording of that goal. If you exempt smaller services and that causes smaller services to take over, the result is that the services that take over are exempt. And when that's going to be the result then you can remove the unnecessary indirection.

> To date smaller venues have very good track records in my personal experience.

One of the ways they do this is that smaller services generally have a niche, and then depending on what that niche is, they can avoid a lot of this trouble because the nature of their audience doesn't attract it.

This site is a good example. Discussion of highly contentious debates is heavily suppressed, the site doesn't support posting images or videos and the audience is such that only a specific set of topics will get any traction.

Which is fine if that's what you're looking for, and there is a place for that, but services with a different focus will attract different elements and then have more of a problem. And saying "well just don't host any of that" is the false positives problem. Should there be nowhere that can host contentious political debates or where adults can express their sexuality?

The large sites have these problems because they're general purpose and thereby attract and include all kinds of things. If you split things into special-purpose sites while still expecting the sum of them to provide full coverage, then some of them can avoid the problems by limiting their scope, but the other ones then have to take on what the first set excluded, and you've only moved the problem to a different place instead of actually solving it.