I replied to the original tweet too ("what would you do if you were Jack Dorsey?"). I said I'd shut the whole thing down.
Unfortunately, these extremely contradictory subjective images of HN seem to be a consequence of its structure, being non-siloed: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que.... This creates a paradox where precisely because the site is less divisive it feels more divisive—in the sense that it feels to people like it is dominated by their enemies, whoever their enemies may be. That's extremely bad for community, and I don't know what to do about it, other than post a version of this comment every time it comes up.
Thanks for caring about level-headedness, in any case.
I mean, I agree with you that we all have biases and blind spots in our perception. Which means... so do the mods. I comment because I want HN to continue to be a site that people like me want to comment on. The site that "people whose comments dang likes" want to comment on surely looks different.
But I think your explanation for why this is so is much too simplistic. The difference seems to be that you aren't being bombarded every day with utterly contradictory, extremely strong feelings about how awful it is. If you were, you wouldn't be able to write what you just posted. Your judgment that the perception "isn't symmetric" is wildly out of line with what I encounter here, so one of us must be dealing with an extremely skewed sample. Perhaps you read more HN posts and talk to a wider variety of people about HN than I do. From my perspective, the links below are typical—and there are countless more where these came from. Of course, there are also countless links claiming exactly the opposite, but since you already believe that, they aren't the medicine in this case. I sample that list when responding to commenters who see things this way:
https://news.ycombinator.com/item?id=23729568
https://news.ycombinator.com/item?id=17197581
https://news.ycombinator.com/item?id=23429442
https://news.ycombinator.com/item?id=20438487
https://news.ycombinator.com/item?id=15032682
https://news.ycombinator.com/item?id=19471335
https://news.ycombinator.com/item?id=15937781
https://news.ycombinator.com/item?id=21627676
https://news.ycombinator.com/item?id=15388778
https://news.ycombinator.com/item?id=20956287
https://news.ycombinator.com/item?id=15585780
https://news.ycombinator.com/user?id=BetterThanSlave
https://news.ycombinator.com/user?id=slamdance
https://news.ycombinator.com/item?id=15307915
A sample email, for a change of pace: "It's clear ycombinator is clearly culling right-wing opinions and thoughts. The only opinions allowed to remain on the site are left wing [...] What a fucking joke your site has become."
https://news.ycombinator.com/item?id=20202305
https://news.ycombinator.com/item?id=18664482
https://news.ycombinator.com/item?id=16397133
https://news.ycombinator.com/item?id=15546533
https://news.ycombinator.com/item?id=15752730
https://news.ycombinator.com/item?id=20645202
I think there is contention right now because moderator decisions are opaque, so people come up with their own narratives. Without actual data there is no way to tell what type of bias exists and why, so it's easy to make up a personal narrative that isn't backed by any actual data.
User flagging is also currently opaque, and a similar argument applies. If I had to provide a reason for why I flagged something and knew that my name would be publicly associated with the items I've flagged, then I would be much more careful. Right now, flagging anything is consequence-free because it is opaque.
Making this mistake would lead to more argument, not less—the opposite of what was intended. It would simply reproduce the same old arguments at a meta level, giving the fire a whole new dimension of fuel to burn. Worse, it would skew more of HN into flamewars and meta fixation on the site itself, which are the two biggest counterfactors to its intended use.
Such lists would be most attractive to the litigious and bureaucratic sort of user, the kind that produces two or more new objections to every answer you give [1]. That's a kind of DoS attack on moderation resources. Since there are always more of them than of us, it's a thundering herd problem too.
This would turn moderation into even more of a double bind [2] and might even make it impossible, since we function on the edge of the impossible already. Worst of all, it would starve HN of time and energy for making the site better—something that unfortunately is happening already. This is a well-known hard problem with systems like this: a minority of the community consumes a majority of the resources. Really we should be spending those resources on making the site better for its intended use by the majority of its users.
So forgive me, but I think publishing a full moderation log would be a mistake. I'll probably be having nightmares about it tonight.
[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
Any complaint without data to back it up would be thrown in the trash pile.
In any case, it's a worthwhile experiment to try because it can't make your life worse. I can't really imagine anything worse than being compared to Hitler and Stalin, especially if all that person is doing is venting their anger. I'd want to avoid being the target of that anger, and I would require mathematical analysis from anyone who claimed to be justifiably angry, to show the actual justification for their anger. Without data you will continue to get hate mail that's nothing more than people making up a story to justify their own anger. And you have already noticed the personal-narrative angle, so I'm not telling you anything new here. The data takes away the "personal" part of the narrative, which I think is an improvement.
There's a deeper issue though. Such an analysis would depend on labeling the data accurately in the first place, and opposing sides would never agree on how to label it. Indeed, they would adjust the labels until the analysis produced what they already 'know' to be the right answer—not because of conscious fraud but simply because the situation seems so obvious to them to begin with. As I said above, the only people motivated enough to work on this would be ones who would never accept any result that didn't reproduce what they already know, or feel they know.
I'm curious whether, given all that you've shared, you think it's even _possible_ to scale a "healthy" discussion site any larger than HN currently is. It's clear that HN's success is in no small part due to the commitment, passion, and active participation of the few moderators. Contrast that with some of the top comments, which describe how toxic Twitter is, and I wonder if there's some sort of limit to effective moderation, or if we just haven't found more scalable solutions for managing millions of humans talking openly online sans toxicity? Cheers
NateEag makes some good points in the sibling comment. You'd have to create the culture at the level of the moderation team, and that's not easy. The way we approach this work on HN has aspects that reach deep into personal life, in a way that I would not feel comfortable requiring of anybody—nor would it work anyhow. If you tried to build such an organization using any standard corporate approach it would likely be a disaster. But maybe it could be done in a different way, or maybe there is an approach that doesn't resemble how we do it on HN.
Would it be possible with the economics of a startup, where the priority has to be growth and/or monetization? Probably less.