I replied to the original tweet too ("what would you do if you were Jack Dorsey?"). I said I'd shut the whole thing down.
Unfortunately, these extremely contradictory subjective images of HN seem to be a consequence of its non-siloed structure: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que.... This creates a paradox: precisely because the site is less divisive, it feels more divisive—in the sense that it feels to people like it is dominated by their enemies, whoever their enemies may be. That's extremely bad for community, and I don't know what to do about it, other than post a version of this comment every time it comes up.
Thanks for caring about level-headedness, in any case.
I mean, I agree with you that we all have biases and blind spots in our perception. Which means... so do the mods. I comment because I want HN to continue to be a site that people like me want to comment on. The site that "people whose comments dang likes" want to comment on surely looks different.
But I think your explanation of why this is so is much too simplistic. The difference seems to be that you aren't being bombarded every day with utterly contradictory, extremely strong feelings about how awful it is. If you were, you wouldn't be able to write what you just posted. Your judgment that the perception "isn't symmetric" is wildly out of line with what I encounter here, so one of us must be dealing with an extremely skewed sample. Perhaps you read more HN posts and talk to a wider variety of people about HN than I do. From my perspective, the links below are typical—and there are countless more where these came from. Of course, there are also countless links claiming exactly the opposite, but since you already believe that, they aren't the medicine in this case. I sample that list when responding to commenters who see things this way:
https://news.ycombinator.com/item?id=23729568
https://news.ycombinator.com/item?id=17197581
https://news.ycombinator.com/item?id=23429442
https://news.ycombinator.com/item?id=20438487
https://news.ycombinator.com/item?id=15032682
https://news.ycombinator.com/item?id=19471335
https://news.ycombinator.com/item?id=15937781
https://news.ycombinator.com/item?id=21627676
https://news.ycombinator.com/item?id=15388778
https://news.ycombinator.com/item?id=20956287
https://news.ycombinator.com/item?id=15585780
https://news.ycombinator.com/user?id=BetterThanSlave
https://news.ycombinator.com/user?id=slamdance
https://news.ycombinator.com/item?id=15307915
A sample email, for a change of pace: "It's clear ycombinator is clearly culling right-wing opinions and thoughts. The only opinions allowed to remain on the site are left wing [...] What a fucking joke your site has become."
https://news.ycombinator.com/item?id=20202305
https://news.ycombinator.com/item?id=18664482
https://news.ycombinator.com/item?id=16397133
https://news.ycombinator.com/item?id=15546533
https://news.ycombinator.com/item?id=15752730
https://news.ycombinator.com/item?id=20645202
I think there is contention right now because moderator decisions are opaque, so people come up with their own narratives. Without actual data there is no way to tell what kind of bias exists or why, so it's easy to make up a personal narrative that isn't backed by any actual data.
User flagging is also currently opaque, and a similar argument applies. If I had to provide a reason for why I flagged something, and knew that my name would be publicly associated with the items I've flagged, then I would be much more careful. Right now, flagging anything is consequence-free because it is opaque.
I also don't think it's possible to have any forum without bias, so I'm certain the data will indicate bias. But at least it will be transparent and obvious, so people can point to actual data to make their case one way or the other. It's hard to improve a situation if there is no data to point to and argue about. Without data, people just tell stories about whatever makes the most sense from whatever sparse data they have managed to reverse-engineer from personal observations.
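To make the proposal concrete, here is a minimal sketch of what one entry in such a public log might contain. Everything here is invented for illustration; nothing like this schema actually exists:

    # Hypothetical sketch of one public moderation-log entry.
    # All field names are invented; this is not an actual HN schema.
    from dataclasses import dataclass

    @dataclass
    class ModerationLogEntry:
        item_id: int    # the comment or story acted on
        action: str     # e.g. "downweight", "flag-kill", "detach"
        actor: str      # moderator name, or the flagging user
        reason: str     # short required explanation
        timestamp: str  # e.g. "2020-07-04T12:00:00Z"

Requiring the actor and reason fields is what would make flagging non-anonymous, and therefore no longer consequence-free.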
Making this mistake would lead to more argument, not less—the opposite of what was intended. It would simply reproduce the same old arguments at a meta level, giving the fire a whole new dimension of fuel to burn. Worse, it would skew more of HN into flamewars and meta fixation on the site itself, which are the two biggest counterfactors to its intended use.
Such lists would be most attractive to the litigious and bureaucratic sort of user, the kind that produces two or more new objections to every answer you give [1]. That's a kind of DoS attack on moderation resources. Since there are always more of them than of us, it's a thundering herd problem too.
This would turn moderation into even more of a double bind [2] and might even make it impossible, since we function on the edge of the impossible already. Worst of all, it would starve HN of time and energy for making the site better—something that unfortunately is happening already. This is a well-known hard problem with systems like this: a minority of the community consumes a majority of the resources. Really we should be spending those making the site better for its intended use by the majority of its users.
So forgive me, but I think publishing a full moderation log would be a mistake. I'll probably be having nightmares about it tonight.
[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
Any complaint without data to back it up would be thrown in the trash pile.
In any case, it's a worthwhile experiment to try, because it can't make your life worse. I can't really imagine anything worse than being compared to Hitler and Stalin, especially if all that person is doing is venting their anger. I'd want to avoid being the target of that anger, and I would require mathematical analysis from anyone who claimed to be justifiably angry, to show the actual justification for their anger. Without data you will continue to get hate mail that's nothing more than people making up a story to justify their own anger. You have already noticed the personal-narrative angle, so I'm not telling you anything new here. The data takes away the "personal" part of the narrative, which I think is an improvement.
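For example, and with numbers invented purely for illustration, the kind of mathematical analysis I have in mind is a plain significance test over the published log:

    # Invented numbers, purely to illustrate the kind of test a
    # complaint about biased flagging would need to survive.
    from scipy.stats import chi2_contingency

    #            flagged  not flagged
    observed = [[40,      960],   # comments labeled "left"
                [55,      945]]   # comments labeled "right"

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
    # Here p is well above 0.05: the gap is within chance, so this
    # particular complaint would go in the trash pile.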
There are 2 mods running HN. Responding to people is TAXING - as in, it's hugely costly. And it has some terrible edge cases which destroy the process:
The costly occasions are when you meet people who are:
a) angry,
b) rule lawyers, or
c) malignantly motivated.
At this point their goal is to get attention or to apply coercive force to the moderation process.
These guys are an existential threat to the conversational process, and one of their win conditions is to get people to turn against the moderators.
Social media is a topic that HN gets wrong so regularly, and with so little recourse to research or analysis, that I would avoid discussing moderation in general here.
The fact is that if people are arguing in good faith, we can have some amount of peace, and even deal with inadvertent faux pas and ignorance, provided you never reach an eternal September scenario.
But bad faith actors make even this scenario impossible.
There's a deeper issue though. Such an analysis would depend on labeling the data accurately in the first place, and opposing sides would never agree on how to label it. Indeed, they would adjust the labels until the analysis produced what they already 'know' to be the right answer—not because of conscious fraud but simply because the situation seems so obvious to them to begin with. As I said above, the only people motivated enough to work on this would be ones who would never accept any result that didn't reproduce what they already know, or feel they know.
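A toy illustration of the labeling problem, with entirely invented data: the same moderation log supports opposite conclusions depending on whose labels you accept.

    # Five moderated comments, labeled by two hypothetical opposing
    # annotators. The apparent "bias" flips depending purely on
    # whose labels you trust.
    from collections import Counter

    moderated = ["c1", "c2", "c3", "c4", "c5"]
    labels_a = {"c1": "right", "c2": "right", "c3": "right",
                "c4": "left", "c5": "neutral"}
    labels_b = {"c1": "left", "c2": "neutral", "c3": "right",
                "c4": "left", "c5": "left"}

    for name, labels in [("A", labels_a), ("B", labels_b)]:
        counts = Counter(labels[c] for c in moderated)
        print(f"annotator {name}: {dict(counts)}")
    # A sees moderation hitting "right" 3 times; B sees it hitting
    # "left" 3 times. Same log, opposite findings.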
For people who have NEVER thought about social networks and conversations online, I find this site discusses some of the blander but more game-theoretic elements of networks/trust, and therefore of online conversations:
-----------------
For you guys (HN mods), I'd bet that you in particular are already abreast of this stuff.
- I'd ask if you have heard of or seen CivilServant, by Nathan Matias - it's a system for running experiments on forums and testing the results (seeing whether there is a measurable change in user behavior). A toy sketch of this kind of experiment appears after the lists below.
https://natematias.com/ - CivilServant; Matias is a professor at Cornell. He probably has an account here.
https://civilservant.io/moderation_experiment_r_science_rule...
- Books: Custodians of the Internet.
------
Going through some of the papers I have stocked away, sadly in no sane order. I can't say if they are classic papers; you may have better ones.
- Policy/law paper: Georgetown Law, Regulating Online Content Moderation. https://www.law.georgetown.edu/georgetown-law-journal/wp-con...
- NBER paper on polarization - https://www.nber.org/papers/w23258. I disagreed with / was surprised by the conclusion. America-centric.
- Homophily and minority-group size explain perception biases in social networks, https://www.nature.com/articles/s41562-019-0677-4
- The spreading of misinformation online: https://www.pnas.org/content/113/3/554.full
- The University of Alabama has a Reddit research group - https://arrg.ua.edu/research.html - with two papers, one of which explores the effect of a sudden influx of new users on r/TwoXChromosomes: https://firstmonday.org/ojs/index.php/fm/article/view/10143/...
- Policy: OFCOM (UK) has a policy paper on using AI for moderation: https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/...
- Algorithmic content moderation: Technical and political challenges in the automation of platform governance - https://journals.sagepub.com/doi/10.1177/2053951719897945
- The Web Centipede: Understanding How Web Communities Influence Each Other Through the Lens of Mainstream and Alternative News Sources
- Community Interaction and Conflict on the Web
- You Can’t Stay Here: The Efficacy of Reddit’s 2015 Ban Examined Through Hate Speech
Papers I have to read myself:
- Does Transparency in Moderation Really Matter?: User Behavior After Content Removal Explanations on Reddit. https://shagunjhaver.com/files/research/jhaver-2019-transpar...
- Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms: https://journals.sagepub.com/doi/abs/10.1177/146144481877305... (I need to read that paper, but I expect it to be a good foundation of knowledge and examples)
Other stuff:
- The Turing Institute talked about moderators being key workers during COVID - https://www.turing.ac.uk/blog/why-content-moderators-should-...
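As promised above, here is a toy sketch of the CivilServant-style analysis, with invented numbers (see the r/science rules-comment experiment linked earlier for the real thing):

    import math

    # Invented removal counts for threads randomly assigned a pinned
    # rules comment (treatment) vs. none (control), mimicking the
    # r/science experiment design.
    removed_t, n_t = 88, 1000    # treatment: rules comment pinned
    removed_c, n_c = 139, 1000   # control: no rules comment

    p_t, p_c = removed_t / n_t, removed_c / n_c
    pool = (removed_t + removed_c) / (n_t + n_c)
    se = math.sqrt(pool * (1 - pool) * (1 / n_t + 1 / n_c))
    z = (p_c - p_t) / se
    print(f"removal rate {p_c:.1%} -> {p_t:.1%}, z = {z:.2f}")
    # |z| > 1.96: the intervention measurably changed user behavior.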
I'm curious whether, given all that you've shared, you think it's even _possible_ to scale a "healthy" discussion site any larger than HN currently is? It's clear that HN's success is in no small part due to the commitment, passion, and active participation of its few moderators. Contrast that with some of the top comments, which describe how toxic Twitter is, and I wonder if there's some sort of limit to effective moderation, or if we just haven't found more scalable ways to let millions of humans talk openly online sans toxicity. Cheers
Most sites its size are far, far worse, I think.
I personally believe that is due to human nature.
I think that is what dang has observed and is trying to articulate - no matter how smart or rigorous or mathematical you are, you still are human and thus subject to the human condition.
One way that manifests is the conviction that the Other is winning the war (and that there is a war, for that matter).
I take it as almost axiomatic that a site with Twitter's volume cannot be anything but the cesspool it is.
It's too big for a single person to even begin to read a statistically significant fraction of the content.
That means moderation is a hilariously stupid concept at that scale. Any team of moderators large enough to do the job will itself suffer the fragmentation and conflicts that online forums do, and find itself unable to agree on what the policies should be, let alone how they should be adapted in contentious cases (and by definition, you only need moderation in contentious cases).
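Some back-of-envelope arithmetic makes the point, assuming the commonly cited ballpark of about 500 million tweets per day:

    # What fraction of Twitter could one person read in a day?
    # 500M tweets/day is an assumed ballpark, not an official figure.
    tweets_per_day = 500_000_000
    reading_seconds = 8 * 3600          # one 8-hour shift
    tweets_read = reading_seconds / 5   # ~5 seconds per tweet
    print(f"{tweets_read:,.0f} tweets read, "
          f"{tweets_read / tweets_per_day:.4%} of one day's volume")
    # ~5,760 tweets, about 0.001% of a single day's output.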
NateEag makes some good points in the sibling comment. You'd have to create the culture at the level of the moderation team, and that's not easy. The way we approach this work on HN has aspects that reach deep into personal life, in a way that I would not feel comfortable requiring of anybody—nor would it work anyhow. If you tried to build such an organization using any standard corporate approach it would likely be a disaster. But maybe it could be done in a different way, or maybe there is an approach that doesn't resemble how we do it on HN.
Would it be possible with the economics of a startup, where the priority has to be growth and/or monetization? Probably less.
For example, the human nature you're talking about is by far the strongest force on HN, and the scale (though tiny compared to Twitter or Facebook or Reddit) is already beyond what one would suppose possible for a forum like this.
I've found that clear, vivid examples from people are crucial torchlights which can be shared around to give people a snapshot of what mods feel or witness. This then allows the conversation with non-mods to progress faster, since this type of storytelling is what people are best optimized to consume.
No.
We hadn't really found it before the internet either (these problems are endemic to human/sentient nature).
The internet only makes things industrialized.
There are things you can do that reduce the number of friction points, making it possible to self-govern:
1) Narrow topics/purpose - the closer to an objective science, the better.
2) No politics, no religion - as far as possible.
3) The topic should not be static or largely opinion-oriented. Better if it is goal-driven, with progress milestones easily discussed and queried (lose weight, get healthy, ask artists, learn Photoshop).
4) Clear and shareable tests to weed out posers - r/badeconomics, r/AskHistorians.
5) Strong moderation.
6) Little to no meta discussion.
7) Directed paths for self-promotion.
8) Get lucky and have a topic that attracts polite, good-faith debaters who can identify and eject bad-faith actors (the holy grail).
Each of these options removes or modulates a source of drama. With enough of them removed you can still get flame wars, but it will be better than if you had never done these things at all.
I would agree that HN is far too big for moderation alone to save it, though I hadn't quite put that together when I wrote my first post.
I think pg's original guidelines managed to capture enough of a cultural ideal that much of the original culture has been preserved organically by the users themselves (though I'm not qualified to speak to the culture of the early years, or how much it has changed since then).
You and the other mod(s?) have done a great job of being a guiding hand, and of understanding that it's too big for anything other than a loose guiding hand to be relevant, from a moderation perspective. You can remove things that shouldn't be discussed, show egregious repeat offenders the door, and encourage people to behave well and be restrained (in large part by example).
Twitter is so much vaster, and grew so fast, that even a guiding hand and good founding culture could not hope to save it. I suspect the way its design encourages rapid-fire back-and-forth also really hurts the nature of interaction on the site.
I wrote about this a bit here: https://news.ycombinator.com/item?id=23727261. Shirky's famous 2003 essay about internet communities was talking in terms of a few hundred people, and argued that groups can't scale beyond that. HN has scaled far beyond that, and though it is not a group in every sense of that essay, it has retained, let's say, some groupiness. It's not a war of all against all—or at least, not only that.
As we learn more about how to operate it, I'm hoping that we can do more things to encourage positive group dynamics. We shall see. The public tides are very much against it right now, but those can change.