
139 points by stubish | 4 comments
jackvalentine No.44439355
Australians are broadly supportive of these kinds of actions - there is a view that foreign internet behemoths have failed to moderate for themselves and will therefore have moderation imposed on them however imperfect.

Can’t say I blame them.

replies(3): >>44439415 #>>44439676 #>>44439817 #
AnthonyMouse No.44439817
> there is a view that foreign internet behemoths have failed to moderate for themselves and will therefore have moderation imposed on them however imperfect.

This view is manufactured. The premise is that better moderation is available and despite that, literally no one is choosing to do it. The fact is that moderation is hard, and in particular, excluding everything that is actually bad without also incurring a catastrophically high false positive rate is infeasible.

But the people who are the primary victims of the false positives and the people who want the bad stuff fully censored aren't all the same people. The second group likes to pretend that there is a magic solution that doesn't throw the first group under the bus, so that they can throw the first group under the bus.

replies(5): >>44439891 #>>44439944 #>>44440013 #>>44440547 #>>44441786 #
bigfatkitten No.44439891
> The premise is that better moderation is available and despite that, literally no one is choosing to do it.

It’s worse than that. Companies actively refuse to do anything about content that is reported to them directly, at least until the media kicks up a stink.

Nobody disputes that reliably detecting bad content is hard, but doing nothing about bad content you know about is inexcusable.

https://archive.is/8dq8q

replies(1): >>44439988 #
AnthonyMouse No.44439988
Your link says the opposite of what you claim:

> Meta said it has in the past two years taken down 27 pedophile networks and is planning more removals.

Moreover, the rest of the article describes the difficulty of doing moderation. If you build a general-purpose algorithm that links up people with similar interests, and there is a group of people with an interest in child abuse, the algorithm doesn't inherently know that. And if you push on it to make it do something different in that case than it does in the general case, the people you're trying to thwart will actively take countermeasures, like switching keywords or using coded language.

Meanwhile, user reporting features are full of false positives and of corporate and political operatives trying to have legitimate content removed, so expecting a platform to respond to every report both immediately and perfectly is unreasonable.

Pretending that this is easy to solve is the thing authoritarians do to justify steamrolling innocent people for failing to fully eliminate a problem that nobody has any good way to fully eliminate.

replies(1): >>44440049 #
bigfatkitten No.44440049
> Your link says the opposite of what you claim

I don’t know where you got that from. Meta’s self-congratulatory takedown of “27 pedophile networks” is a drop in the ocean.

Here’s a fairly typical example of them actively deciding to do nothing in response to a report. This mirrors my own experience.

> Like other platforms, Instagram says it enlists its users to help detect accounts that are breaking rules. But those efforts haven’t always been effective.

> Sometimes user reports of nudity involving a child went unanswered for months, according to a review of scores of reports filed over the last year by numerous child-safety advocates.

> Earlier this year, an anti-pedophile activist discovered an Instagram account claiming to belong to a girl selling underage-sex content, including a post declaring, “This teen is ready for you pervs.” When the activist reported the account, Instagram responded with an automated message saying: “Because of the high volume of reports we receive, our team hasn’t been able to review this post.”

> After the same activist reported another post, this one of a scantily clad young girl with a graphically sexual caption, Instagram responded, “Our review team has found that [the account’s] post does not go against our Community Guidelines.” The response suggested that the user hide the account to avoid seeing its content.

replies(1): >>44440106 #
AnthonyMouse No.44440106
Your claim was that they "actively refuse" to do anything about it, but they clearly do actually take measures.

As mentioned, the issue is that they get zillions of reports, and vast numbers of them come from organized scammers trying to get legitimate content taken down. Then you report something real and it gets lost in a sea of fake reports.

What are they supposed to do about that? It takes far fewer resources to file a fake report than to investigate one, and nobody can drink the entire ocean.

replies(4): >>44440299 #>>44440514 #>>44440754 #>>44440858 #
riffraff No.44440514
But this goes back to the original argument: maybe if you can't avoid causing harm then you shouldn't be allowed to operate?

E.g. if you produce eggs and can't avoid salmonella, at some point your operation should be shut down.

Facebook and its ilk have massive profits; they can afford more moderators.

replies(1): >>44440634 #
AnthonyMouse No.44440634
> But this goes back to the original argument: maybe if you can't avoid causing harm then you shouldn't be allowed to operate?

By this principle the government can't operate the criminal justice system anymore, because it has too many false positives and uncaptured negative externalities - and then you don't have anything left to use to tell Facebook to censor things.

> Facebook and its ilk have massive profits, they can afford more moderators.

They have large absolute profits because of the large number of users but the profit per user is in the neighborhood of $1/month. How much human moderation do you think you can get for that?

replies(1): >>44440849 #
fc417fc802 No.44440849
> By this principle the government can't operate the criminal justice system

Obviously we make case by case decisions regarding such things. There are plenty of ways in which governments could act that populations in the west generally deem unacceptable. Private prisons in the US, for example, are quite controversial at present.

It's worth noting that if the regulator actually enforces requirements then they become merely a cost of doing business that all participants are subject to. Such a development in this case could well mean that all the large social platforms operating within the Australian market start charging users in that region on the order of $30 per year to maintain an account.

replies(1): >>44441061 #
AnthonyMouse No.44441061
> Obviously we make case by case decisions regarding such things.

You can make case-by-case decisions regarding individual aspects of the system, but no modern criminal justice system exists that has never put an innocent person behind bars, much less on trial. Fiddling with the details can get you better or worse outcomes, but it can't get you something that satisfies the principle that you can't operate if you can't operate without ever doing any harm to anyone. Which implies that the principle is unreasonable and isn't of any use in other contexts either.

> It's worth noting that if the regulator actually enforces requirements then they become merely a cost of doing business that all participants are subject to. Such a development in this case could well mean that all the large social platforms operating within the Australian market start charging users in that region on the order of $30 per year to maintain an account.

The premise there is that you could solve the problem for $30 per person annually, i.e. $2.50/month. I'm left asking the question again: how much human moderation do you expect to get for that?
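
A rough back-of-envelope sketch of that question (every cost figure below is an assumption for illustration, not a real number from any platform):

    # Back-of-envelope: how much human review does $2.50 per user per month buy?
    # All inputs are assumed values for illustration.
    fee_per_user_month = 2.50        # USD, the hypothetical $30/year fee
    moderator_cost_per_hour = 30.0   # USD, assumed fully loaded cost of a reviewer
    seconds_per_review = 120         # assumed time for one careful, in-context review

    budget_seconds = fee_per_user_month / moderator_cost_per_hour * 3600
    reviews_per_user = budget_seconds / seconds_per_review
    print(f"{budget_seconds:.0f} seconds of reviewer time per user per month")
    print(f"roughly {reviews_per_user:.1f} careful reviews per user per month")
    # -> 300 seconds, i.e. about 2-3 careful reviews per user per month,
    #    before paying for infrastructure, engineering, appeals, or anything else.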

Meanwhile, that's $30 per service. That's going to increase the network effect of any existing service because each additional recurring fee or requirement to submit payment data is a deterrent to using another one. And maybe the required fee would be more than that. Are you sure you want to entrench the incumbents as a permanent oligarchy?

replies(1): >>44446478 #
fc417fc802 No.44446478
That doesn't follow. A principle's absolute form being unattainable doesn't mean the principle isn't of use. As I stated, you make a case-by-case judgment when applying it. That you aren't satisfied by the imperfection doesn't imply a lack of usefulness.

I expect you can get quite a bit of moderation for that price. If a given user is exceeding that then they are likely so problematic that you will want to ban them anyway. Speaking from personal experience, the vast majority of users never act in a way that requires attention in the first place.

If the law discriminates on size you don't end up with (or at least don't exacerbate) the oligarchy scenario. In fact it acts to counter network effects by economically incentivizing use of the smaller services.

replies(1): >>44449422 #
AnthonyMouse No.44449422
> That doesn't follow. A principle's absolute form being unattainable doesn't mean the principle isn't of use. As I stated, you make a case-by-case judgment when applying it. That you aren't satisfied by the imperfection doesn't imply a lack of usefulness.

The principle was that if you can't operate without doing harm, you can't operate.

But then nobody can operate, including the government.

If you give up that absolutist principle and concede that there are trade-offs in everything, that's the status quo and there's nothing to fix. They already have the incentive to spend a reasonable amount of resources to remove those users, because they don't want them. The unfortunate reality is that spending a reasonable amount of resources doesn't fully get rid of them, and spending an unreasonable amount of resources (or making drastic trade-offs against false positives) is unreasonable.

> I expect you can get quite a bit of moderation for that price. If a given user is exceeding that then they are likely so problematic that you will want to ban them anyway. Speaking from personal experience, the vast majority of users never act in a way that requires attention in the first place.

It's not about whether some specific user exceeds the threshold. You have a reporting system, and some double-digit percentage of users will use it as an "I disagree with this poster's viewpoint" button. Competitors will use it to try to take down the competition's legitimate content. Criminal organizations will create fake accounts or use stolen credentials and use the reporting system to extort people into paying ransom: if the victim doesn't pay, the fake accounts mass-report the victim's account, and if even a small percentage of the fake reports make it through the filter, the victim loses their account. Meanwhile there are legitimate reports in there as well.

You would then need enough human moderators to thoroughly investigate every one of those reports, taking into account context and possibly requiring familiarity with the specific account doing the posting to determine whether it was intended as satire or sarcasm. The accuracy has to be well in excess of 99% or you're screwed: even a 1% false positive rate means the extortion scheme works, because the scammers file 1000 fake reports and the victim's account gets 10 strikes against it; and a 1% false negative rate means people file 1000 legitimate reports, 990 of them get taken down, and each of the 10 that were missed has a story written about it in the newspaper.
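
To make that arithmetic concrete, here is a small sketch (the report volumes and error rates are the illustrative figures above, not measured data):

    # Illustrative numbers only: why even 99% accuracy isn't good enough.
    fake_reports = 1000          # filed against one victim by an extortion ring
    false_positive_rate = 0.01   # 1% of bogus reports wrongly upheld
    strikes = fake_reports * false_positive_rate
    print(f"innocent account accumulates {strikes:.0f} strikes")   # -> 10

    legit_reports = 1000         # genuine reports of bad content
    false_negative_rate = 0.01   # 1% of genuine reports wrongly dismissed
    missed = legit_reports * false_negative_rate
    print(f"{missed:.0f} genuinely bad posts survive review")      # -> 10
    # Either failure mode is enough for a newspaper story, and both
    # happen simultaneously at the same 99% accuracy.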

Banning the accounts posting the actual illegal content is what they already do, but those people just make new accounts. Banning the accounts of honest people who get a lot of fake reports makes the problem worse, because it makes it easier to do the extortion scheme and then more criminals do it.

> If the law discriminates on size you don't end up with (or at least exacerbate) the oligarchy scenario. In fact it acts to counter network effects by economically incentivizing use of the smaller services.

But that was the original issue -- if you exempt smaller services then smaller services get a competitive advantage, and then you're back to the services people actually use not being required to do aggressive moderation. The only benefit then is that you got the services to become smaller, and if that's the goal then why not just do it directly and pass a law capping entity size?

replies(1): >>44450184 #
fc417fc802 No.44450184
You're being dense. The original statement was:

> maybe if you can't avoid causing harm then you shouldn't be allowed to operate?

That isn't plausible to interpret as an absolute. The tradeoff is implied - as far as I can tell there isn't any other reasonable interpretation. It follows that the contextual implication is that the status quo is one of excessive harm.

Clearly others don't agree with your view that a "reasonable amount of resources" is being spent on the problem at present.

To your hand wringing about abuse of reports, nearly all of the smaller platforms I have participated on have treated that as some form of bannable offense.

Responding to reports doesn't take anywhere near as much effort as you're making out. The situation with the large centralized networks is analogous to a company that keeps cutting its IT budget while management loudly complains that it's simply impossible to get reliable infrastructure in this day and age without spending an excessive amount.

> if that's the goal then why not just do it directly and pass a law capping entity size?

Because that's (quite obviously) not the goal. To date smaller venues have very good track records in my personal experience. The idea being floated was that the centralized services that actively manipulate the behavior of large portions of the population either improve theirs or be removed from the market.

replies(1): >>44453130 #
AnthonyMouse No.44453130
> That isn't plausible to interpret as an absolute.

It's inaccurate rhetoric, is the point. You would have to say "maybe if you can't avoid causing excessive harm you shouldn't be allowed to operate" in order to have a reasonable statement, but then you would be inviting the valid criticism that "excessive harm" isn't what's currently happening. And dodging that criticism by eliding the qualifier is the thing I'm not inclined to let someone get away with.

> Clearly others don't agree with your view that a "reasonable amount of resources" is being spent on the problem at present.

But do they disagree based on some kind of logical reasoning or evidence, or because they have a general feeling of wanting to protect kids, which can't tell you whether any given proposal to do so will cost more than it's worth?

> To your hand wringing about abuse of reports, nearly all of the smaller platforms I have participated on have treated that as some form of bannable offense.

Which has two problems. First, if someone reports something that is on the line and you decide it's not over the line even though it was close, are you going to ban the person who reported it? And second, the people submitting false reports as a business model don't care about getting banned, because they'll just open more accounts, or they were using compromised accounts to begin with, in which case you're banning the accounts of innocent people whose machines were infected with malware.

> Responding to reports doesn't take anywhere near as much effort as you're making out.

Responding to reports with high accuracy absolutely does require a large amount of resources. Consider that the most common system which actually tries to do that -- and even then still often gets it wrong -- is the court system. You can't even get within two orders of magnitude of that level of resources per report while still calling it a feasible amount to foist onto a private party as an unfunded mandate.

> Because that's (quite obviously) not the goal.

If not propping up megacorps is a goal -- and it should be -- then encouraging smaller services is a rewording of that goal. If you exempt smaller services and that causes smaller services to take over, the result is that the services that take over are exempt. And when that's going to be the result then you can remove the unnecessary indirection.

> To date smaller venues have very good track records in my personal experience.

One of the ways they do this is that smaller services generally have a niche, and then depending on what that niche is, they can avoid a lot of this trouble because the nature of their audience doesn't attract it.

This site is a good example. Discussion of highly contentious topics is heavily suppressed, the site doesn't support posting images or videos, and the audience is such that only a specific set of topics will get any traction.

Which is fine if that's what you're looking for, and there is a place for that, but services with a different focus will attract different elements and then have more of a problem. And saying "well just don't host any of that" is the false positives problem. Should there be nowhere that can host contentious political debates or where adults can express their sexuality?

The large sites have these problems because they're general purpose and thereby attract and include all kinds of things. If you split things into special-purpose sites while still expecting the sum of them to provide full coverage, then some of them can avoid the problems by limiting their scope, but the others have to take on what's left, and you've only moved the problem to a different place instead of actually solving it.