139 points | stubish | 1 comment
jackvalentine No.44439355
Australians are broadly supportive of these kinds of actions - there is a view that foreign internet behemoths have failed to moderate themselves and will therefore have moderation imposed on them, however imperfect.

Can’t say I blame them.

replies(3): >>44439415 #>>44439676 #>>44439817 #
AnthonyMouse No.44439817
> there is a view that foreign internet behemoths have failed to moderate for themselves and will therefore have moderation imposed on them however imperfect.

This view is manufactured. The premise is that better moderation is available and that, despite this, literally no one is choosing to do it. The fact is that moderation is hard, and in particular, excluding everything actually bad without also having a catastrophically high false-positive rate is infeasible.

But the people who are the primary victims of the false positives and the people who want the bad stuff fully censored aren't all the same people, and the second group likes to pretend there is a magic solution that doesn't throw the first group under the bus so that they can go ahead and throw the first group under the bus.

replies(5): >>44439891 #>>44439944 #>>44440013 #>>44440547 #>>44441786 #
bigfatkitten No.44439891
> The premise is that better moderation is available and despite that, literally no one is choosing to do it.

It’s worse than that. Companies actively refuse to do anything about content that is reported to them directly, at least until the media kicks up a stink.

Nobody disputes that reliably detecting bad content is hard, but doing nothing about bad content you know about is inexcusable.

https://archive.is/8dq8q

replies(1): >>44439988 #
AnthonyMouse No.44439988
Your link says the opposite of what you claim:

> Meta said it has in the past two years taken down 27 pedophile networks and is planning more removals.

Moreover, the rest of the article describes the difficulty of doing moderation. If you build a general-purpose algorithm that links up people with similar interests, and one of those groups has an interest in child abuse, the algorithm doesn't inherently know that. And if you push on it to make it behave differently in that case than in the general case, the people you're trying to thwart will actively take countermeasures, like switching to different keywords or coded language.
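
To make the problem concrete, here's a toy sketch (in Python, with invented tags and users; nothing like any platform's real system) of the kind of interest matching a recommender does. The similarity math is blind to what the interests actually mean:

    # Toy interest-based recommender: suggest the accounts whose tag sets
    # overlap most with yours. It has no notion of whether an interest is
    # benign or abusive; it only sees overlap.

    def jaccard(a: set, b: set) -> float:
        """Similarity of two interest sets: |intersection| / |union|."""
        return len(a & b) / len(a | b) if a | b else 0.0

    def recommend(user: str, interests: dict[str, set], top_n: int = 3) -> list[str]:
        """Rank other users by interest overlap with `user`."""
        me = interests[user]
        scored = [(jaccard(me, tags), name)
                  for name, tags in interests.items() if name != user]
        return [name for _, name in sorted(scored, reverse=True)[:top_n]]

    # Whether the shared tags are about knitting or are coded terms for
    # something criminal, the math is identical, which is why "just make
    # the algorithm not do that" is harder than it sounds.
    interests = {
        "alice": {"knitting", "wool", "patterns"},
        "bob":   {"knitting", "patterns", "crochet"},
        "carol": {"codedterm1", "codedterm2"},
        "dave":  {"codedterm1", "codedterm2", "codedterm3"},
    }
    print(recommend("alice", interests))  # ['bob', ...]
    print(recommend("carol", interests))  # ['dave', ...]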

Meanwhile, user reporting features are flooded with false positives and with corporate and political operatives trying to have legitimate content removed, so expecting the company to respond to every report both immediately and perfectly is unreasonable.

Pretending that this is easy to solve is what authoritarians do to justify steamrolling innocent people over a problem nobody has any good way to fully eliminate.

replies(1): >>44440049 #
bigfatkitten No.44440049
> Your link says the opposite of what you claim

I don’t know where you got that from. Meta’s self-congratulatory takedown of “27 pedophile networks” is a drop in the ocean.

Here’s a fairly typical example of them actively deciding to do nothing in response to a report. This mirrors my own experience.

> Like other platforms, Instagram says it enlists its users to help detect accounts that are breaking rules. But those efforts haven’t always been effective.

> Sometimes user reports of nudity involving a child went unanswered for months, according to a review of scores of reports filed over the last year by numerous child-safety advocates.

> Earlier this year, an anti-pedophile activist discovered an Instagram account claiming to belong to a girl selling underage-sex content, including a post declaring, “This teen is ready for you pervs.” When the activist reported the account, Instagram responded with an automated message saying: “Because of the high volume of reports we receive, our team hasn’t been able to review this post.”

> After the same activist reported another post, this one of a scantily clad young girl with a graphically sexual caption, Instagram responded, “Our review team has found that [the account’s] post does not go against our Community Guidelines.” The response suggested that the user hide the account to avoid seeing its content.

replies(1): >>44440106 #
AnthonyMouse No.44440106
Your claim was that they "actively refuse" to do anything about it, but they clearly do actually take measures.

As mentioned, the issue is that they get zillions of reports, and vast numbers of them come from organized scammers trying to get legitimate content taken down. Then you report something real and it gets lost in a sea of fake reports.

What are they supposed to do about that? It takes far fewer resources to file a fake report than to investigate one, and nobody can drink the entire ocean.
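
Some rough, made-up numbers show how quickly that asymmetry swamps a review queue (a back-of-the-envelope sketch only; the volumes and review times are assumptions, not Meta's figures):

    # Back-of-the-envelope queue math with invented numbers: if filing a
    # report is nearly free and reviewing one is not, reviewer capacity is
    # mostly eaten by noise.
    reports_per_day = 1_000_000     # assumed reports filed per day
    genuine_fraction = 0.02         # assume only 2% are real violations
    minutes_per_review = 2          # assumed human review time per report
    reviewer_minutes_per_day = 8 * 60

    reviews_per_reviewer = reviewer_minutes_per_day // minutes_per_review
    reviewers_needed = reports_per_day * minutes_per_review / reviewer_minutes_per_day
    wasted = reports_per_day * (1 - genuine_fraction)

    print(f"{reviews_per_reviewer} reviews per reviewer per day")
    print(f"{reviewers_needed:,.0f} reviewers needed just to keep up")
    print(f"{wasted:,.0f} of today's reviews would be spent on noise")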

replies(4): >>44440299 #>>44440514 #>>44440754 #>>44440858 #
bigfatkitten No.44440858
> but they clearly do actually take measures.

Sometimes, but clearly not often enough.

Does a refusal get more active than a message that says “Our review team has found that [the account’s] post does not go against our Community Guidelines”?

> Then you report something real and it gets lost in an sea of fake reports.

It didn’t get ‘lost’: they (or their contract content moderators at Concentrix in the Philippines) sat on it, and then sent a message saying they had decided not to do anything about it.

> What are they supposed to do about that?

They’ve either looked at the content and decided to do nothing about it, or they’ve lied when they said that they had, and that it didn’t breach policy. Which do you suppose it was?

replies(1): >>44440954 #
AnthonyMouse No.44440954
> Does a refusal get more active than a message that says “Our review team has found that [the account’s] post does not go against our Community Guidelines”?

That's assuming their “review team” actually reviewed it before sending that message and purposely chose to let it stay up, knowing it was a false negative. But that seems pretty unlikely compared to the alternative: that the reviewers were overwhelmed and making determinations without doing a real review, or doing one so cursory that the error was made blindly.

> They’ve either looked at the content and decided to do nothing about it, or they’ve lied when they said that they had, and that it didn’t breach policy. Which do you suppose it was?

Almost certainly the second one. What would even be their motive for the first one? Pedos are a blight who can't possibly be generating enough ad revenue through normal usage to make up for all the trouble they cause, even under the assumption that the company has no moral compass whatsoever.