Can’t say I blame them.
This view is manufactured. The premise is that better moderation is readily available and that, despite this, literally no one is choosing to do it. The reality is that moderation is hard, and in particular that excluding everything actually bad without a catastrophically high false-positive rate is infeasible.
But the people who bear the brunt of the false positives and the people who want the bad stuff fully censored aren't the same people. And the second group likes to pretend there is a magic solution that doesn't throw the first group under the bus, so that they can go ahead and throw the first group under the bus.
It’s worse than that. Companies actively refuse to do anything about content that is reported to them directly, at least until the media kicks up a stink.
Nobody disputes that reliably detecting bad content is hard, but doing nothing about bad content you know about is inexcusable.
> Meta said it has in the past two years taken down 27 pedophile networks and is planning more removals.
Moreover, the rest of the article describes how hard moderation is in practice. If you build a general-purpose algorithm that links up people with similar interests, and some group of people has an interest in child abuse, the algorithm doesn't inherently know that. And if you push on it to behave differently in that case than in the general case, the people you're trying to thwart will actively take countermeasures, like switching keywords or using coded language.
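The cat-and-mouse part is easy to demonstrate with a toy example. Here's a minimal sketch, assuming a platform tried literal keyword matching (all terms below are made-up placeholders, not anything a real platform actually blocks):

```python
# Toy sketch only: why literal keyword matching loses the cat-and-mouse game.
# The blocked terms and the evasions below are made-up placeholders.
BLOCKED_TERMS = {"badterm", "otherbadterm"}

def is_flagged(post_text: str) -> bool:
    """Flag a post if any blocked term appears verbatim."""
    words = set(post_text.lower().split())
    return bool(words & BLOCKED_TERMS)

print(is_flagged("selling badterm pics"))    # True  -> caught
print(is_flagged("selling b4dterm pics"))    # False -> a trivial misspelling slips through
print(is_flagged("selling spicy content"))   # False -> coded language slips through entirely
```

As soon as the filter ships, the vocabulary changes, and you're back where you started.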
Meanwhile, user reporting features are also full of false positives and of corporate and political operatives trying to have legitimate content removed, so expecting platforms to respond to every report immediately and perfectly is unreasonable.
Pretending that this is easy to solve is what authoritarians do to justify steamrolling innocent people over a problem that nobody has any good way of fully eliminating.
I don’t know where you got that from. Meta’s self-congratulatory takedown of “27 pedophile networks” is a drop in the ocean.
Here’s a fairly typical example of them actively deciding to do nothing in response to a report. This mirrors my own experience.
> Like other platforms, Instagram says it enlists its users to help detect accounts that are breaking rules. But those efforts haven’t always been effective.
> Sometimes user reports of nudity involving a child went unanswered for months, according to a review of scores of reports filed over the last year by numerous child-safety advocates.
> Earlier this year, an anti-pedophile activist discovered an Instagram account claiming to belong to a girl selling underage-sex content, including a post declaring, “This teen is ready for you pervs.” When the activist reported the account, Instagram responded with an automated message saying: “Because of the high volume of reports we receive, our team hasn’t been able to review this post.”
> After the same activist reported another post, this one of a scantily clad young girl with a graphically sexual caption, Instagram responded, “Our review team has found that [the account’s] post does not go against our Community Guidelines.” The response suggested that the user hide the account to avoid seeing its content.
As mentioned, the issue is that they get zillions of reports, and vast numbers of them come from organized scammers trying to get legitimate content taken down. Then you report something real and it gets lost in a sea of fake reports.
What are they supposed to do about that? It takes far fewer resources to file a fake report than investigate one and nobody can drink the entire ocean.
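To put rough numbers on that asymmetry (every figure here is a made-up assumption, purely for illustration):

```python
# Back-of-the-envelope arithmetic on the filing-vs-reviewing asymmetry.
# All numbers are assumptions for illustration, not real moderation stats.
seconds_to_file_report = 15       # a few clicks and a canned reason
seconds_to_review_report = 300    # a human checking context, history, appeals

filed_per_hour_per_troll = 3600 / seconds_to_file_report          # 240 reports/hour
reviewed_per_hour_per_reviewer = 3600 / seconds_to_review_report  # 12 reports/hour

# Each person spamming fake reports consumes many reviewers' worth of capacity.
print(filed_per_hour_per_troll / reviewed_per_hour_per_reviewer)  # 20.0
```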
Do like banks: Know Your Customer. If someone performs a crime using your assets, you are required to supply evidence to the police. You then ban the person from using your assets. If someone makes false claims, ban that person from making reports.
Now your rate of false positives is low enough to handle.
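A minimal sketch of what that reporter-side bookkeeping could look like, assuming the platform already has verified identities on file (the threshold and field names are my assumptions, not anything any platform actually implements):

```python
# Illustrative sketch of the proposal above: track outcomes per verified
# reporter and ban repeat false reporters. Threshold is an assumption.
from dataclasses import dataclass

FALSE_REPORT_LIMIT = 3

@dataclass
class Reporter:
    verified_id: str            # KYC-style identifier held by the platform
    false_reports: int = 0
    banned: bool = False

    def record_outcome(self, report_was_false: bool) -> None:
        """Update the reporter's standing after a report is reviewed."""
        if report_was_false:
            self.false_reports += 1
            if self.false_reports >= FALSE_REPORT_LIMIT:
                self.banned = True

def accept_report(reporter: Reporter) -> bool:
    """Only non-banned, identity-verified reporters enter the review queue."""
    return not reporter.banned
```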
But also, your proposal would deter people from reporting crimes because they're not only hesitant to give randos or mass surveillance corporations their social security numbers, they may fear retaliation from the criminals if it leaks.
And the same thing happens to people posting content: identity verification deters posting, which is even worse than a false positive because it's invisible, so you have no way to discover or address it.