Can’t say I blame them.
This view is manufactured. The premise is that better moderation is available and that, despite this, literally no one is choosing to do it. The fact is that moderation is hard, and in particular that excluding everything actually bad without also incurring a catastrophically high false positive rate is infeasible.
But the people who are the primary victims of the false positives and the people who want the bad stuff fully censored are largely not the same people, and the second group likes to pretend that there is a magic solution that doesn't throw the first group under the bus, so that they can throw the first group under the bus.
It’s worse than that. Companies actively refuse to do anything about content that is reported to them directly, at least until the media kicks up a stink.
Nobody disputes that reliably detecting bad content is hard, but doing nothing about bad content you know about is inexcusable.
The actual goal is, as always, complete control over what Australians can see and do on the internet, and complete knowledge of what we see and do on the internet.
> Meta said it has in the past two years taken down 27 pedophile networks and is planning more removals.
Moreover, the rest of the article describes the difficulty of doing moderation. If you make a general-purpose algorithm that links up people with similar interests, and there is a group of people with an interest in child abuse, the algorithm doesn't inherently know that. And if you push on it to make it do something different in that case than it does in the general case, the people you're trying to thwart will actively take countermeasures, like using different keywords or coded language.
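Here's a toy sketch of why that keyword arms race is so hard to win. The blocklist, terms, and code below are hypothetical illustrations, not any real platform's system.

```python
# Toy illustration (not any real platform's filter): a naive keyword
# blocklist only catches the exact terms it already knows about, so
# coded language or trivial misspellings pass straight through.

BLOCKLIST = {"banned_term"}  # hypothetical disallowed keyword

def is_flagged(post: str) -> bool:
    # Flag a post if any whitespace-separated token is on the blocklist.
    return any(token in BLOCKLIST for token in post.lower().split())

print(is_flagged("selling banned_term here"))      # True  - exact match caught
print(is_flagged("selling b4nned_term here"))      # False - trivial misspelling evades it
print(is_flagged("selling special produce here"))  # False - coded language evades it
```

Tightening the filter to catch known variants just pushes the evasion one step further, which is the countermeasure dynamic described above.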
Meanwhile user reporting features are also full of false positives or corporate and political operatives trying to have legitimate content removed, so expecting them to both immediately and perfectly respond to every report is unreasonable.
Pretending that this is easy to solve is the thing authoritarians do to justify steamrolling innocent people over a problem that nobody has any good way to fully eliminate.
Manufactured by whom? Moderation was done very tightly on vBulletin forums back in the day; the difference is that Facebook/Google et al expect to operate at a scale where (they claim) moderation can't be done.
The magic solution is if you can't operate at scale safely, don't operate at scale.
I don’t know where you got that from. Meta’s self-congratulatory takedown of “27 pedophile networks” is a drop in the ocean.
Here’s a fairly typical example of them actively deciding to do nothing in response to a report. This mirrors my own experience.
> Like other platforms, Instagram says it enlists its users to help detect accounts that are breaking rules. But those efforts haven’t always been effective.
> Sometimes user reports of nudity involving a child went unanswered for months, according to a review of scores of reports filed over the last year by numerous child-safety advocates.
> Earlier this year, an anti-pedophile activist discovered an Instagram account claiming to belong to a girl selling underage-sex content, including a post declaring, “This teen is ready for you pervs.” When the activist reported the account, Instagram responded with an automated message saying: “Because of the high volume of reports we receive, our team hasn’t been able to review this post.”
> After the same activist reported another post, this one of a scantily clad young girl with a graphically sexual caption, Instagram responded, “Our review team has found that [the account’s] post does not go against our Community Guidelines.” The response suggested that the user hide the account to avoid seeing its content.
https://en.wikipedia.org/wiki/Manufacturing_Consent
> Moderation was done very tightly on vBulletin forums back in the day; the difference is that Facebook/Google et al expect to operate at a scale where (they claim) moderation can't be done.
The difference isn't the scale of Google, it's the scale of the internet.
Back in the day the internet was full of university professors and telecommunications operators. Now it has Russian hackers and an entire battalion of shady SEO specialists.
If you want to build a search engine that competes with Google, it doesn't matter if you have 0.1% of the users and 0.001% of the market cap, you're still expected to index the whole internet. Which nobody could possibly do by hand anymore.
As mentioned, the issue is that they get zillions of reports and vast numbers of them are organized scammers trying to get them to take down legitimate content. Then you report something real and it gets lost in a sea of fake reports.
What are they supposed to do about that? It takes far fewer resources to file a fake report than investigate one and nobody can drink the entire ocean.
Edit: you can’t just throw out a Wikipedia link to Manufacturing Consent from the 80s as an explanation here. What a joke of a position. Maybe people have been hoodwinked by a media conspiracy or maybe they just don’t like what the kids are exposed to at a young age these days.
p.s. i agree with your comment.
Do you dispute the thesis of the book? Moral panics have always been used to sell both newspapers and bad laws.
> Maybe people have been hoodwinked by a media conspiracy or maybe they just don’t like what the kids are exposed to at a young age these days.
People have never liked what kids are exposed to. But it rather matters whether the proposed solution has more costs than effectiveness.
> Maybe search is dead but doesn’t know it yet.
Maybe some people who prefer the cathedral to the bazaar would prefer that. But the ability of the public to discover anything outside of what the priests deign to tell them isn't something we should give up without a fight.
I put it to you, similarly without evidence, that your support for unfettered filth freedom is the result of a process of manufacturing consent now that American big tech dominates.
If the system is pathologically unable to deal with false reports, to the extent that moderation has effectively ground to a standstill, perhaps the regulator ought to get involved at that point and force the company to either change its ways or go out of business trying?
E.g. if you produce eggs and you can't avoid salmonella, at some point your operation should be shut down.
Facebook and its ilk have massive profits, they can afford more moderators.
Meanwhile moral panics are at least as old as the Salem Witch Trials.
Moderation is hard when you prioritise growth and ad revenue over moderation, certainly.
We know a good solution - throw a lot of manpower at it. That may not be feasible for the giant platforms...
Oh no.
This isn't evidence that they have a system for taking down content without a huge number of false positives. It's evidence that the previous administrators of Twitter were willing to suffer a huge number of false positives around accusations of racism and the current administrators are willing to suffer them around accusations of underaged content.
By this principle the government can't operate the criminal justice system anymore, because it has too many false positives and uncaptured negative externalities, and then you don't have anything left to use to tell Facebook to censor things.
> Facebook and its ilk have massive profits, they can afford more moderators.
They have large absolute profits because of the large number of users but the profit per user is in the neighborhood of $1/month. How much human moderation do you think you can get for that?
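To make that concrete, here is a back-of-the-envelope sketch. Only the roughly $1/month profit figure comes from the comment above; the moderator cost and review rate are illustrative assumptions, not real data.

```python
# Back-of-the-envelope sketch. The ~$1/month profit-per-user figure is
# from the comment above; the other numbers are assumptions chosen only
# for illustration.

profit_per_user_per_month = 1.00   # USD, per the comment above
moderator_cost_per_hour   = 20.00  # assumed fully loaded cost of one reviewer
reviews_per_hour          = 60     # assumed: about one minute of attention per report

cost_per_review  = moderator_cost_per_hour / reviews_per_hour     # ~$0.33
reviews_per_user = profit_per_user_per_month / cost_per_review    # ~3

print(f"~{reviews_per_user:.0f} one-minute human reviews per user per month "
      "before the entire profit is consumed")
```

Under those assumptions, roughly three minutes of human attention per user per month wipes out the profit entirely.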
Do like banks: Know Your Customer. If someone commits a crime using your assets, you are required to supply evidence to the police. You then ban the person from using your assets. If someone makes false claims, ban that person from making reports.
Now your rate of false positives is low enough to handle.
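A minimal sketch of what that reporter-accountability idea could look like: track whether each account's past reports were upheld and stop accepting reports from accounts that file too many bogus ones. The threshold and data model are hypothetical, not any platform's actual policy.

```python
# Hypothetical sketch of "ban people who make false reports".

from collections import defaultdict

class ReportTracker:
    def __init__(self, max_bogus_reports: int = 3):
        self.bogus_counts = defaultdict(int)  # reporter id -> rejected-report count
        self.banned_reporters = set()
        self.max_bogus = max_bogus_reports

    def can_report(self, reporter_id: str) -> bool:
        return reporter_id not in self.banned_reporters

    def record_outcome(self, reporter_id: str, report_upheld: bool) -> None:
        # A report found to be baseless counts against the reporter;
        # accumulate enough and their future reports are ignored.
        if not report_upheld:
            self.bogus_counts[reporter_id] += 1
            if self.bogus_counts[reporter_id] >= self.max_bogus:
                self.banned_reporters.add(reporter_id)

tracker = ReportTracker()
for _ in range(3):
    tracker.record_outcome("brigade_account_42", report_upheld=False)
print(tracker.can_report("brigade_account_42"))  # False
```

The replies further down note the obvious weakness: throwaway or compromised accounts make such a ban cheap to route around.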
My contention is more that they don’t have the will, because it would impact profits, and that it’s possible that effective moderation at scale would hurt their bottom line so much they would be unable to keep operating.
Further, that I would not lament such a passing.
I’m not saying tiny forums are some sort of panacea, merely that huge operations should not be able to get away with (for example) blatant fraudulent advertising on their platforms, on the basis that “we can’t possibly look at all of it”.
Find a way, or stop operating that service.
In the context of Australia objecting to lack of moderation I'm not sure it matters. It seems reasonable for a government to set minimum standards which companies that wish to operate within their territory must abide by. If as you claim (and I doubt) the current way of doing things is uneconomical under those requirements then perhaps it would be reasonable for those products to be excluded from the Australian market. Or perhaps they would instead choose to charge users for the service? Either outcome would make room for fairly priced local alternatives to gain traction.
This seems like a case of free trade enabling an inferior American product to be subsidized by the vendor thereby undercutting any potential for a local industry. The underlying issue feels roughly analogous to GDPR except that this time the legislation is terrible and will almost certainly make society worse off in various ways if it passes.
Obviously we make case by case decisions regarding such things. There are plenty of ways in which governments could act that populations in the west generally deem unacceptable. Private prisons in the US, for example, are quite controversial at present.
It's worth noting that if the regulator actually enforces requirements then they become merely a cost of doing business that all participants are subject to. Such a development in this case could well mean that all the large social platforms operating within the Australian market start charging users in that region on the order of $30 per year to maintain an account.
But also, your proposal would deter people from reporting crimes: not only are people hesitant to give randos or mass surveillance corporations their social security numbers, they may also fear retaliation from the criminals if it leaks.
And the same thing happens for people posting content -- identity verification is a deterrent to posting -- which is even worse than a false positive because it's invisible and you don't have the capacity to discover or address it.
Sometimes, but clearly not often enough.
Does a refusal get more active than a message that says “Our review team has found that [the account’s] post does not go against our Community Guidelines”?
> Then you report something real and it gets lost in a sea of fake reports.
It didn’t get ‘lost’: they (or their contract content moderators at Concentrix in the Philippines) sat on it, and then sent a message saying they had decided not to do anything about it.
> What are they supposed to do about that?
They’ve either looked at the content and decided to do nothing about it, or they’ve lied when they said that they had, and that it didn’t breach policy. Which do you suppose it was?
Typically you would exempt smaller services from such legislation. That's the route Texas took with HB 20.
It is in combination with the high rate of false positives, unless you think the false positives were intentional.
> If as you claim (and I doubt) the current way of doing things is uneconomical under those requirements then perhaps it would be reasonable for those products to be excluded from the Australian market.
If they actually required both removal of all offending content and a low false positive rate (e.g. by allowing customers to sue them for damages for removals of lawful content) then the services would exit the market because nobody could do that.
What they'll typically do instead is accept the high false positive rate rather than leave the market, and then the service remains but becomes plagued by innocent users being victimized by capricious and overly aggressive moderation tactics. But local alternatives couldn't do any better under the same constraints, so you're still stuck with a trash fire.
That's assuming their "review team" actually reviewed it before sending that message and purposely chose to allow it to stay up, knowing that it was a false negative. But that seems pretty unlikely compared to the alternative, where the reviewers were overwhelmed and making determinations without doing a real review, or doing one so cursory that the error was made blindly.
> They’ve either looked at the content and decided to do nothing about it, or they’ve lied when they said that they had, and that it didn’t breach policy. Which do you suppose it was?
Almost certainly the second one. What would even be their motive to do the first one? Pedos are a blight that can't possibly be generating enough ad revenue through normal usage to make up for all the trouble they are, even under the assumption that the company has no moral compass whatsoever.
You can make case by case decisions regarding individual aspects of the system, but no modern criminal justice system exists that has never put an innocent person behind bars, much less on trial. Fiddling with the details can get you better or worse but it can't get you something that satisfies the principle that you can't operate if you can't operate without ever doing any harm to anyone. Which implies that principle is unreasonable and isn't of any use in other contexts either.
> It's worth noting that if the regulator actually enforces requirements then they become merely a cost of doing business that all participants are subject to. Such a development in this case could well mean that all the large social platforms operating within the Australian market start charging users in that region on the order of $30 per year to maintain an account.
The premise there is that you could solve the problem for $30 per person annually, i.e. $2.50/month. I'm left asking the question again, how much human moderation do you expect to get for that?
Meanwhile, that's $30 per service. That's going to increase the network effect of any existing service because each additional recurring fee or requirement to submit payment data is a deterrent to using another one. And maybe the required fee would be more than that. Are you sure you want to entrench the incumbents as a permanent oligarchy?
Is the theory supposed to be that the moderation would cost them users, or that the cost of paying for the moderation would cut too much into their profits?
Because the first one doesn't make a lot of sense, the perpetrators of these crimes are a trivial minority of their user base that inherently cost more in trouble than they're worth in revenue.
And the problem with the second one is that the cost of doing it properly would not only cut into the bottom line but put them deep into the red on a permanent basis, and then it's not so much a matter of unwillingness but inability.
> I’m not saying tiny forums are some sort of panacea, merely that huge operations should not be able to get away with (for example) blatant fraudulent advertising on their platforms, on the basis that “we can’t possibly look at all of it”.
Should the small forums be able to get away with it though? Because they're the ones even more likely to be operating with a third party ad network they neither have visibility into nor have the leverage to influence.
> Further, that I would not lament such a passing.
If Facebook was vaporized and replaced with some kind of large non-profit or decentralized system or just a less invasive corporation, would I cheer? Probably.
But if every social network was eliminated and replaced with nothing... not so much.
This one. Not just in terms of needing to take on staff, but it would also cut into their bottom line in terms of not being able to take money from bad-faith operators.
> And the problem with the second one is that the cost of doing it properly would not only cut into the bottom line but put them deep into the red on a permanent basis, and then it's not so much a matter of unwillingness but inability.
Inability to do something properly and make a commercial success of it is a 'you' problem.
Take Meta and their ads - they've built a system in which it's possible to register and upload ads and show them to users, more or less instantly with more or less zero human oversight. There are various filters to try and catch stuff, but they're imperfect, so they supply fraudulent ads to their users all the time - fake celebrity endorsements, various things that fall foul of advertising standards. Some are just outright scams. (Local family store you never heard of is closing down! So sad! Buy our dropshipped crap from AliExpress at 8x the price!)
To properly, fully fix this they would need to verify advertisers and review ads before they go live. This is going to slow down delivery, require a moderate sized army of reviewers and it's going to lose them revenue from the scammers. So many disincentives. So they say "This is impossible", but what they mean is "It is impossible to comply with the law and continue to rake in the huge profits we're used to". They may even mean "It is impossible to comply with the law and continue to run facebook".
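For concreteness, here is a minimal sketch of the review-before-it-goes-live model being described, as opposed to publish-first-and-filter-later. The names and structure are hypothetical and heavily simplified; this is not Meta's actual ad pipeline.

```python
# Hypothetical sketch of pre-publication ad review: nothing reaches the
# live list until the advertiser is verified and a reviewer approves.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Ad:
    advertiser_verified: bool
    creative: str
    approved: bool = False

@dataclass
class ReviewQueue:
    pending: List[Ad] = field(default_factory=list)
    live: List[Ad] = field(default_factory=list)

    def submit(self, ad: Ad) -> None:
        # Submission never publishes; the ad waits for a human decision.
        self.pending.append(ad)

    def review(self, ad: Ad, reviewer_ok: bool) -> None:
        self.pending.remove(ad)
        if ad.advertiser_verified and reviewer_ok:
            ad.approved = True
            self.live.append(ad)
        # Rejected ads simply never reach the live list.

queue = ReviewQueue()
scam = Ad(advertiser_verified=False, creative="Local store closing down! So sad!")
queue.submit(scam)
queue.review(scam, reviewer_ok=False)
print(len(queue.live))  # 0 - the fraudulent ad never went live
```

The trade-off being argued about is visible in the shape of the code: every ad now waits on a recorded human decision, which is exactly the delay and staffing cost the instant-publication model avoids.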
OK, that's a classic 'you' problem. (Or it should be). It's not really any different to "My chemical plant can't afford to continue to operate unless I'm allowed to dump toxic byproducts in the river". OK, you can't afford to operate, and if you keep doing it anyway, we're going to sanction you. So ... Bye then?
> Should the small forums be able to get away with it though?
This is not really part of my argument. I don't think they should, no. But again - if they can't control what's being delivered through their site and there's evidence it contravenes the law, that's a them problem and they should stop using those third party networks until the networks can show they comply properly.
> if every social network was eliminated and replaced with nothing... not so much.
Maybe it's time to find a new funding model. It's bad enough having a funding model based on advertising. It's worse having one based on throwing ad messages at people cheap and fast without even checking they meet basic legal standards. But here we are.
I realise this whole thing is a bit off-topic as the discussion is about age-verification and content moderation, and I've strayed heavily into ad models....
Read it. It is specifically targeting companies who currently run riot over young individuals' digital identity, flog it off to marketers, and treat them as a product.
It will also make it harder for the grubby men in their 30s and 40s to groom 14yo girls on Snapchat, which is a bonus.
Notice that the goalposts shifted subtly from moderation of disallowed content to distribution of age restricted content. The latter isn't amenable to size based criteria for obvious reasons.
Note that I don't think the various ID laws are good ideas. I don't even think they're remotely capable of accomplishing their stated goals. Whereas I do expect that it's possible to moderate a given platform decently well if the operator is made to care.
I expect you can get quite a bit of moderation for that price. If a given user is exceeding that then they are likely so problematic that you will want to ban them anyway. Speaking from personal experience, the vast majority of users never act in a way that requires attention in the first place.
If the law discriminates on size you don't end up with (or at least don't exacerbate) the oligarchy scenario. In fact it acts to counter network effects by economically incentivizing use of the smaller services.
It's plausible that it wasn't what some of the supporters intended, but that was the result, and the result wasn't entirely unpredictable. And it plausibly is what some of the supporters intended. When PornHub decided to leave Texas, do you expect they counted it as a cost or had a celebration?
> Notice that the goalposts shifted subtly from moderation of disallowed content to distribution of age restricted content. The latter isn't amenable to size based criteria for obvious reasons.
Would the former be any different? Sites over the threshold are forced to do heavy-handed moderation, causing them to have a significant competitive disadvantage over sites below the threshold, so then the equilibrium shifts to having a larger number of services that each fit below the threshold. Which doesn't even necessarily compromise the network effect if they're federated services so that the network size is the set of all users using that protocol even if none of the operators exceed the threshold.
> Note that I don't think the various ID laws are good ideas. I don't even think they're remotely capable of accomplishing their stated goals. Whereas I do expect that it's possible to moderate a given platform decently well if the operator is made to care.
I'm still not clear on how they're supposed to do that.
The general shape of the problem looks like this:
If you leave them to their own devices, they have the incentive to spend a balanced amount of resources against the problem, because they don't actually want those users but it requires an insurmountable level of resources to fully shake them loose without severely impacting innocent people. So they make some efforts but those efforts aren't fully effective, and then critics point to the failures as if the trade-off doesn't exist.
If you require them to fully stamp out the problem by law, they have to use the draconian methods that severely impact innocent people, because the only remaining alternative is to go out of business. So they do the first one, which is bad.
The principle was that if you can't operate without doing harm, you can't operate.
But then nobody can operate, including the government.
If you give up that absolutist principle and concede that there are trade-offs in everything, that's the status quo and there's nothing to fix. They already have the incentive to spend a reasonable amount of resources to remove those users, because they don't want them. The unfortunate reality is that spending a reasonable amount of resources doesn't fully get rid of them, and spending an unreasonable amount of resources (or making drastic trade-offs against false positives) is unreasonable.
> I expect you can get quite a bit of moderation for that price. If a given user is exceeding that then they are likely so problematic that you will want to ban them anyway. Speaking from personal experience, the vast majority of users never act in a way that requires attention in the first place.
It's not about whether some specific user exceeds the threshold. You have a reporting system and some double-digit percentage of users will use it as an "I disagree with this poster's viewpoint" button. Competitors will use it to try to take down the competition's legitimate content. Criminal organizations will create fake accounts or use stolen credentials and use the reporting system to extort people into paying ransom or the fake accounts will mass report the victim's account, and then if even a small percentage of the fake reports make it through the filter, the victim loses their account. Meanwhile there are legitimate reports in there as well.
You would then need enough human moderators to thoroughly investigate every one of those reports, taking into account context and possibly requiring familiarity with the specific account doing the posting to determine whether it was intended as satire or sarcasm. The accuracy has to be well in excess of 99% or you're screwed, because even a 1% false positive rate means the extortion scheme is effective because they file 1000 fake reports and the victim's account gets 10 strikes against it, and a 1% false negative rate means people make 1000 legitimate reports and they take down 990 of them but each of the 10 they got wrong has a story written about it in the newspaper.
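The arithmetic in that paragraph, written out with the same numbers:

```python
# Same numbers as the paragraph above: 1000 reports, 1% error in each direction.

fake_reports = 1000
false_positive_rate = 0.01           # fraction of bogus reports wrongly acted on
strikes_on_innocent_account = fake_reports * false_positive_rate
print(strikes_on_innocent_account)   # 10.0 - enough to sink the victim's account

real_reports = 1000
false_negative_rate = 0.01           # fraction of genuine reports wrongly dismissed
missed_genuine_reports = real_reports * false_negative_rate
print(missed_genuine_reports)        # 10.0 - each one a potential newspaper story
```

Even 99% accuracy in both directions still leaves double-digit failures per thousand reports on each side.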
Banning the accounts posting the actual illegal content is what they already do, but those people just make new accounts. Banning the accounts of honest people who get a lot of fake reports makes the problem worse, because it makes it easier to do the extortion scheme and then more criminals do it.
> If the law discriminates on size you don't end up with (or at least don't exacerbate) the oligarchy scenario. In fact it acts to counter network effects by economically incentivizing use of the smaller services.
But that was the original issue -- if you exempt smaller services then smaller services get a competitive advantage, and then you're back to the services people actually use not being required to do aggressive moderation. The only benefit then is that you got the services to become smaller, and if that's the goal then why not just do it directly and pass a law capping entity size?
The ID law, sure, I doubt the proponents of it care which alternative comes to pass (ID checks or market exit) since I expect they're opposed to the service to begin with. But that law has no size carveout, I didn't use it as an example, and I don't think it's a good law. So we're likely in agreement regarding it.
> Would the former be any different?
I expect so, yes. You've constructed a dichotomy where heavy handed moderation and failure to moderate effectively are the only possible outcomes. That seems like ideologically motivated helplessness to me.
I'm also not entirely clear what we're talking about anymore. The proposed law has to do with ID checks, the sentiment expressed was "if you don't moderate for yourselves the government will impose on you", and somehow we've arrived at you confidently claiming that decent moderation is unattainable. Yet you haven't specified the price range nor the criteria being adhered to.
The point you raise about federated networks is an interesting one, however it remains to be seen if such networks exhibit the same dynamics that centralized ones do. In the absence of a profit driven incentive for an algorithm that farms engagement we don't yet know if the same social ills will be present.
> maybe if you can't avoid causing harm then you shouldn't be allowed to operate?
That isn't plausible to interpret as an absolute. The tradeoff is implied - as far as I can tell there isn't any other reasonable interpretation. It follows that the contextual implication is that the status quo is one of excessive harm.
Clearly others don't agree with your view that a "reasonable amount of resources" is being spent on the problem at present.
To your hand wringing about abuse of reports, nearly all of the smaller platforms I have participated on have treated that as some form of bannable offense.
Responding to reports doesn't take anywhere near as much effort as you're making out. The situation with the large centralized networks is analogous to a company that keeps cutting its IT budget while management loudly complains that it's simply impossible to get reliable infrastructure in this day and age without spending an excessive amount.
> if that's the goal then why not just do it directly and pass a law capping entity size?
Because that's (quite obviously) not the goal. To date smaller venues have very good track records in my personal experience. The idea being floated was that the centralized services that actively manipulate the behavior of large portions of the population either improve theirs or be removed from the market.
Because AFAICT some of the big platforms are failing at this, before we even get into content moderation.
> Throwing a lot of manpower at moderation only gets you lots of little emperors that try to enforce their own views on others.
Do you consider dang a 'little emperor'? If anything HN seems proof that communities can thrive with moderation.
(1) For the purposes of this Act, age-restricted social media platform means:
(a) an electronic service that satisfies the following conditions:
(i) the sole purpose, or a significant purpose, of the service is to enable online social interaction between 2 or more end-users;
(ii) the service allows end-users to link to, or interact with, some or all of the other end-users;
(iii) the service allows end-users to post material on the service;
(iv) such other conditions (if any) as are set out in the legislative rules; or
(b) an electronic service specified in the legislative rules;
but does not include a service mentioned in subsection (6).

I see nothing in there that talks about young people, identities, flogging anything to marketers, or treating people as product.
I don't dispute that that happens. All I'm saying is that this act is not solving that problem, isn't intended to solve that problem, and is actually part of a larger push to censor the internet for Australians.
This act, as written, requires all interactive websites that are accessible to end-users in Australia to implement age restrictions. And in order to implement age restrictions they must remove anonymity. Which is the point.
It's inaccurate rhetoric, is the point. You would have to say "maybe if you can't avoid causing excessive harm you shouldn't be allowed to operate" in order to have a reasonable statement, but then you would be inviting the valid criticism that "excessive harm" isn't what's currently happening. And dodging that criticism by eliding the qualifier is the thing I'm not inclined to let someone get away with.
> Clearly others don't agree with your view that a "reasonable amount of resources" is being spent on the problem at present.
But do they disagree based on some kind of logical reasoning or evidence, or because they have a general feeling of wanting to protect kids, which can't tell you whether any given proposal to do so will cost more than it's worth?
> To your hand wringing about abuse of reports, nearly all of the smaller platforms I have participated on have treated that as some form of bannable offense.
Which has two problems. First, if someone reports something which is on the line and you decide that it's not over the line even though it was close, you're going to ban the person who reported it? And second, the people submitting false reports as a business model don't care about getting banned because they'll just open more accounts, or they were using compromised accounts to begin with and then you're banning the accounts of innocent people who have had their machines infected with malware.
> Responding to reports doesn't take anywhere near as much effort as you're making out.
Responding to reports with high accuracy absolutely does require a large amount of resources. Consider that the most common system that actually tries to do that -- and even then still often gets it wrong -- is the court system. You can't even get within two orders of magnitude of that level of resources per report while still calling it a feasible amount of resources to foist onto a private party as an unfunded mandate.
> Because that's (quite obviously) not the goal.
If not propping up megacorps is a goal -- and it should be -- then encouraging smaller services is a rewording of that goal. If you exempt smaller services and that causes smaller services to take over, the result is that the services that take over are exempt. And when that's going to be the result then you can remove the unnecessary indirection.
> To date smaller venues have very good track records in my personal experience.
One of the ways they do this is that smaller services generally have a niche, and then depending on what that niche is, they can avoid a lot of this trouble because the nature of their audience doesn't attract it.
This site is a good example. Discussion of highly contentious debates is heavily suppressed, the site doesn't support posting images or videos and the audience is such that only a specific set of topics will get any traction.
Which is fine if that's what you're looking for, and there is a place for that, but services with a different focus will attract different elements and then have more of a problem. And saying "well just don't host any of that" is the false positives problem. Should there be nowhere that can host contentious political debates or where adults can express their sexuality?
The large sites have these problems because they're general purpose and thereby attract and include all kinds of things. If you split things into special-purpose sites while still expecting the sum of them to provide full coverage then some of them can avoid the problems by limiting their scope, but then the other ones have to do it and you've only moved the problem to a different place instead of actually solving it.