jackvalentine No.44439355
Australians are broadly supportive of these kinds of actions - there is a view that foreign internet behemoths have failed to moderate themselves and will therefore have moderation imposed on them, however imperfect.

Can’t say I blame them.

AnthonyMouse No.44439817
> there is a view that foreign internet behemoths have failed to moderate themselves and will therefore have moderation imposed on them, however imperfect.

This view is manufactured. The premise is that better moderation is readily available and that, despite this, literally no one is choosing to do it. The reality is that moderation is hard; in particular, excluding all the actually bad things without also incurring a catastrophically high false positive rate is infeasible.
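
To put rough numbers on that (all of them invented for illustration, not real platform statistics), even an implausibly accurate classifier applied to rare bad content produces mostly false positives:

    # Hypothetical base-rate sketch in Python; every figure is made up.
    posts_per_day = 100_000_000   # volume a large platform might see
    bad_rate = 1e-5               # 1 in 100,000 posts is actually bad
    recall = 0.99                 # classifier catches 99% of the bad posts
    fpr = 1e-4                    # and wrongly flags only 0.01% of good posts

    bad = posts_per_day * bad_rate
    good = posts_per_day - bad
    true_positives = recall * bad
    false_positives = fpr * good

    precision = true_positives / (true_positives + false_positives)
    print(f"flags per day: {true_positives + false_positives:,.0f}")
    print(f"precision: {precision:.0%}")  # ~9%: most flags land on innocent posts

And a 0.01% false positive rate is far better than anything anyone has shipped; relax it to 1% and the precision in this toy model drops to roughly 0.1%.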

But the people who bear the brunt of the false positives and the people who want the bad stuff fully censored aren't the same people, and the second group likes to pretend there is a magic solution that doesn't throw the first group under the bus, precisely so that they can throw the first group under the bus.

bigfatkitten No.44439891
> The premise is that better moderation is available and despite that, literally no one is choosing to do it.

It’s worse than that. Companies actively refuse to do anything about content that is reported to them directly, at least until the media kicks up a stink.

Nobody disputes that reliably detecting bad content is hard, but doing nothing about bad content you know about is inexcusable.

https://archive.is/8dq8q

AnthonyMouse No.44439988
Your link says the opposite of what you claim:

> Meta said it has in the past two years taken down 27 pedophile networks and is planning more removals.

Moreover, the rest of the article describes how difficult moderation is. If you build a general-purpose algorithm that links up people with similar interests, and one group of people shares an interest in child abuse, the algorithm doesn't inherently know that. And if you push it to behave differently in that case than it does in the general case, the people you're trying to thwart will actively take countermeasures, like switching keywords or using coded language.
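
To make the "general purpose algorithm" point concrete, here is a toy sketch of the kind of similarity logic involved (purely illustrative, not any platform's actual system): the recommender only sees opaque item IDs and interaction strengths, so it has no idea what the shared interest actually is.

    import math

    # Toy interest-matching: users are vectors of interactions with opaque
    # item IDs; the algorithm links people whose vectors point the same way
    # without knowing what any item means.
    def cosine(a: dict[str, float], b: dict[str, float]) -> float:
        dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
        norm = math.sqrt(sum(v * v for v in a.values())) \
             * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    user_a = {"item_17": 3.0, "item_42": 1.0}
    user_b = {"item_17": 2.0, "item_42": 2.0}
    print(cosine(user_a, user_b))  # ~0.89: "similar interests", whatever they are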

Meanwhile, user reporting features are flooded with false positives and with corporate and political operatives trying to have legitimate content removed, so expecting platforms to respond to every report both immediately and perfectly is unreasonable.

Pretending that this is easy to solve is what authoritarians do to justify steamrolling innocent people over a problem that nobody has any good way to fully eliminate.

bigfatkitten No.44440049
> Your link says the opposite of what you claim

I don’t know where you got that from. Meta’s self-congratulatory takedown of “27 pedophile networks” is a drop in the ocean.

Here’s a fairly typical example of them actively deciding to do nothing in response to a report. This mirrors my own experience.

> Like other platforms, Instagram says it enlists its users to help detect accounts that are breaking rules. But those efforts haven’t always been effective.

> Sometimes user reports of nudity involving a child went unanswered for months, according to a review of scores of reports filed over the last year by numerous child-safety advocates.

> Earlier this year, an anti-pedophile activist discovered an Instagram account claiming to belong to a girl selling underage-sex content, including a post declaring, “This teen is ready for you pervs.” When the activist reported the account, Instagram responded with an automated message saying: “Because of the high volume of reports we receive, our team hasn’t been able to review this post.”

> After the same activist reported another post, this one of a scantily clad young girl with a graphically sexual caption, Instagram responded, “Our review team has found that [the account’s] post does not go against our Community Guidelines.” The response suggested that the user hide the account to avoid seeing its content.

AnthonyMouse No.44440106
Your claim was that they "actively refuse" to do anything about it, but they clearly do actually take measures.

As mentioned, the issue is that they get zillions of reports, and vast numbers of them come from organized scammers trying to get legitimate content taken down. Then you report something real and it gets lost in a sea of fake reports.

What are they supposed to do about that? It takes far fewer resources to file a fake report than to investigate one, and nobody can drink the entire ocean.
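
The realistic answer is triage rather than exhaustive review: rank the queue so that reports which are severe and likely to be genuine reach human reviewers first. A toy sketch of that idea (the categories, weights, and reporter-accuracy numbers are all hypothetical, not how any platform actually scores reports):

    import heapq
    from dataclasses import dataclass, field

    # Invented severity weights per report category.
    SEVERITY = {"child_safety": 100, "violence": 50, "harassment": 20, "spam": 5}

    @dataclass(order=True)
    class Report:
        neg_priority: float                    # negated so heapq pops the worst first
        post_id: str = field(compare=False)
        category: str = field(compare=False)

    def score(category: str, reporter_accuracy: float, report_count: int) -> float:
        # Severe categories, historically reliable reporters, and multiple
        # independent reports all push a report up the queue.
        return SEVERITY.get(category, 1) * (0.1 + reporter_accuracy) * report_count

    queue: list[Report] = []
    heapq.heappush(queue, Report(-score("spam", 0.2, 1), "post_1", "spam"))
    heapq.heappush(queue, Report(-score("child_safety", 0.9, 3), "post_2", "child_safety"))

    first = heapq.heappop(queue)
    print(first.post_id, first.category)  # post_2 child_safety gets reviewed first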

fc417fc802 No.44440299
Active refusal can (and commonly does) take the form of intentionally remaining unable to respond, or merely putting on the appearance of being unable to. One of the curious things about Twitter pre-acquisition was that underage content somewhat frequently stayed up for months while discriminatory remarks were generally taken down rapidly. Post-acquisition, such content seemed to disappear approximately overnight.

If the system is pathologically unable to deal with false reports, to the extent that moderation has effectively ground to a standstill, perhaps the regulator ought to get involved at that point and force the company to either change its ways or go out of business trying?

AnthonyMouse No.44440592
> One of the curious things about Twitter pre-acquisition was that underage content somewhat frequently stayed up for months while discriminatory remarks were generally taken down rapidly. Post-acquisition, such content seemed to disappear approximately overnight.

This isn't evidence that they have a system for taking down content without a huge number of false positives. It's evidence that the previous administrators of Twitter were willing to suffer a huge number of false positives around accusations of racism, and the current administrators are willing to suffer them around accusations of underage content.

fc417fc802 No.44440803
I agree that on its own it isn't evidence of the ability to respond without excessive false positives. But similarly, it isn't evidence of an inability to do so either.

In the context of Australia objecting to a lack of moderation, I'm not sure it matters. It seems reasonable for a government to set minimum standards that companies wishing to operate within its territory must abide by. If, as you claim (and I doubt), the current way of doing things is uneconomical under those requirements, then perhaps it would be reasonable for those products to be excluded from the Australian market. Or perhaps they would instead choose to charge users for the service? Either outcome would make room for fairly priced local alternatives to gain traction.

This seems like a case of free trade enabling an inferior American product to be subsidized by the vendor, thereby undercutting any potential for a local industry. The underlying issue feels roughly analogous to the GDPR, except that this time the legislation is terrible and will almost certainly make society worse off in various ways if it passes.

AnthonyMouse No.44440902
> I agree that on its own it isn't evidence of the ability to respond without excessive false positives. But similarly, it isn't evidence of an inability to do so either.

It is in combination with the high rate of false positives, unless you think the false positives were intentional.

> If, as you claim (and I doubt), the current way of doing things is uneconomical under those requirements, then perhaps it would be reasonable for those products to be excluded from the Australian market.

If they actually required both the removal of all offending content and a low false positive rate (e.g. by allowing customers to sue them for damages over removals of lawful content), then the services would exit the market, because nobody can do both.

What they'll typically do instead is accept the high false positive rate rather than leave the market, and then the service remains but becomes plagued by capricious and overly aggressive moderation that victimizes innocent users. But local alternatives couldn't do any better under the same constraints, so you're still stuck with a trash fire.
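
The incentive behind that choice is easy to see with a back-of-envelope calculation (all figures invented): as long as the expected cost of leaving one genuinely bad post up dwarfs the cost of wrongly removing a lawful one, removing on weak evidence is the "rational" move.

    # Invented numbers, just to show the asymmetry that drives over-removal.
    cost_missed_bad = 500_000.0   # fine / brand damage per bad post left up
    cost_wrong_removal = 5.0      # one annoyed legitimate user
    p_bad = 0.02                  # moderator's guess that a flagged post is bad

    expected_cost_keep = p_bad * cost_missed_bad             # 10,000
    expected_cost_remove = (1 - p_bad) * cost_wrong_removal  # 4.9

    # Removal is "cheaper" even at 2% confidence the post is actually bad,
    # so borderline content comes down and the false positives pile up.
    print(expected_cost_keep, expected_cost_remove)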