I replied to the original tweet too ("what would you do if you were Jack Dorsey?"). I said I'd shut the whole thing down.
Unfortunately, these extremely contradictory subjective images of HN seem to be a consequence of its structure, being non-siloed: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que.... This creates a paradox where precisely because the site is less divisive it feels more divisive—in the sense that it feels to people like it is dominated by their enemies, whoever their enemies may be. That's extremely bad for community, and I don't know what to do about it, other than post a version of this comment every time it comes up.
Thanks for caring about level-headedness, in any case.
I mean, I agree with you that we all have biases and blind spots in our perception. Which means... so do the mods. I comment because I want HN to continue to be a site that people like me want to comment on. The site that "people whose comments dang likes" want to comment on surely looks different.
But I think your explanation of why this happens is much too simplistic. The difference seems to be that you aren't being bombarded every day with utterly contradictory, extremely strong feelings about how awful it is. If you were, you wouldn't be able to write what you just posted. Your judgment that the perception "isn't symmetric" is wildly out of line with what I encounter here, so one of us must be dealing with an extremely skewed sample. Perhaps you read more HN posts and talk to a wider variety of people about HN than I do. From my perspective, the links below are typical—and there are countless more where these came from. Of course, there are also countless links claiming exactly the opposite, but since you already believe that, they aren't the medicine in this case. I sample that list when responding to commenters who see things this way:
https://news.ycombinator.com/item?id=23729568
https://news.ycombinator.com/item?id=17197581
https://news.ycombinator.com/item?id=23429442
https://news.ycombinator.com/item?id=20438487
https://news.ycombinator.com/item?id=15032682
https://news.ycombinator.com/item?id=19471335
https://news.ycombinator.com/item?id=15937781
https://news.ycombinator.com/item?id=21627676
https://news.ycombinator.com/item?id=15388778
https://news.ycombinator.com/item?id=20956287
https://news.ycombinator.com/item?id=15585780
https://news.ycombinator.com/user?id=BetterThanSlave
https://news.ycombinator.com/user?id=slamdance
https://news.ycombinator.com/item?id=15307915
A sample email, for a change of pace: "It's clear ycombinator is clearly culling right-wing opinions and thoughts. The only opinions allowed to remain on the site are left wing [...] What a fucking joke your site has become."
https://news.ycombinator.com/item?id=20202305
https://news.ycombinator.com/item?id=18664482
https://news.ycombinator.com/item?id=16397133
https://news.ycombinator.com/item?id=15546533
https://news.ycombinator.com/item?id=15752730
https://news.ycombinator.com/item?id=20645202
I think there is contention right now because moderator decisions are opaque, so people come up with their own narratives. Without actual data there is no way to tell what kind of bias exists, or why, so it's easy to construct a personal narrative unsupported by evidence.
User flagging is also currently opaque, and a similar argument applies. If I had to provide a reason for flagging something, and knew my name would be publicly associated with the items I flagged, I would be much more careful. Right now, flagging anything is consequence-free because it is opaque.
There are two mods running HN. Responding to people is taxing—hugely costly—and it has some terrible edge cases that destroy the process:
The costly occasions are when you meet people who are either
a) angry,
b) rule lawyers, or
c) malignantly motivated.
At that point their goal is to get attention or to apply coercive force to the moderation process.
These people are an existential threat to the conversational process, and one of their win conditions is to get others to turn against the moderators.
Social media is a topic that HN gets wrong so regularly, and so often without recourse to research or analysis, that I would avoid discussing moderation in general here.
The fact is that if people are arguing in good faith, we can have some amount of peace, and even deal with inadvertent faux pas and ignorance—provided you never reach an Eternal September scenario.
But bad faith actors make even this scenario impossible.
For people who have never thought about social networks and conversations online, I find this site useful for discussing some of the blander but more game-theoretic elements of networks/trust, and therefore of online conversations:
-----------------
For you guys (HN mods), I'd bet you in particular are already abreast of this stuff.
- I'd ask if you have heard of/seen Civil Servant, by Nathan Matias—it's a system for running experiments on forums and testing the results (seeing whether there is a measurable change in user behavior).
https://natematias.com/ - Civil Servant; Matias is a professor at Cornell. He probably has an account here.
https://civilservant.io/moderation_experiment_r_science_rule...
- Books: Custodians of the Internet.
------
Going through some of the papers I have stashed away, sadly in no sane order. I can't say whether they are classic papers; you may have better.
- Policy/law paper: Georgetown Law, Regulating Online Content Moderation. https://www.law.georgetown.edu/georgetown-law-journal/wp-con...
- NBER paper on polarization - https://www.nber.org/papers/w23258; I disagreed with/was surprised by the conclusion. America-centric.
- Homophily and minority-group size explain perception biases in social networks, https://www.nature.com/articles/s41562-019-0677-4
- The spreading of misinformation online: https://www.pnas.org/content/113/3/554.full
- The University of Alabama has a Reddit research group - https://arrg.ua.edu/research.html - with two papers, one of which explores the effect of a sudden influx of new users on r/2xchromosomes. https://firstmonday.org/ojs/index.php/fm/article/view/10143/...
- Policy: OFCOM (UK) has a policy paper on using AI for moderation: https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/...
- Algorithmic content moderation: Technical and political challenges in the automation of platform governance - https://journals.sagepub.com/doi/10.1177/2053951719897945
- The Web Centipede: Understanding How Web Communities Influence Each Other Through the Lens of Mainstream and Alternative News Sources
- Community Interaction and Conflict on the Web,
- You Can’t Stay Here: The Efficacy of Reddit’s 2015 Ban Examined Through Hate Speech
Papers I have to read myself:
- Does Transparency in Moderation Really Matter?: User Behavior After Content Removal Explanations on Reddit. https://shagunjhaver.com/files/research/jhaver-2019-transpar...
- Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms: https://journals.sagepub.com/doi/abs/10.1177/146144481877305... (I expect it to be a good foundation of knowledge and examples)
Other stuff:
- The Turing Institute argued that moderators should be treated as key workers during COVID - https://www.turing.ac.uk/blog/why-content-moderators-should-...