
482 points ilamont | 2 comments
_bxg1 ◴[] No.23807033[source]
I honestly think the only solution is for individuals to recuse themselves from those networks (I say on one of those networks), lower the trust they place in digital information, etc. It's become clear that the downward spiral is intrinsic to the medium itself (or possibly just the scale). I don't believe that any amount of technology, or product-rethinking, or UX will change that. We just weren't meant to interact this way. My only hope is that people eventually get disenchanted or burned-out enough that they simply stop engaging.

I replied to the original tweet too ("what would you do if you were Jack Dorsey?"). I said I'd shut the whole thing down.

replies(3): >>23807161 #>>23813894 #>>23815451 #
asah ◴[] No.23807161[source]
Sadly, the level-headed people recuse themselves, which only adds to the toxicity.
replies(1): >>23807309 #
newacct583 ◴[] No.23807309[source]
Actually what happens is the level-headed people on one side of an issue divide recuse themselves, leaving a "seemingly level-headed consensus echo chamber" behind. IMHO, that's worse. This account exists largely to counter exactly that trend. It's important (to me) that newcomers to the site don't get the idea that "hackers" are all fringe libertarians on every non-technical subject.
replies(3): >>23807375 #>>23808300 #>>23810001 #
dang ◴[] No.23807375[source]
This site may feel like a "consensus echo chamber" but in reality it is nothing remotely close to that. I think you may be running into the notice-dislike bias: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que.... Since you report noticing fringe libertarians, we can be sure that you dislike fringe libertarianism. We can also be sure that they have just the opposite picture of HN, since everyone crafts their picture in the image of what they dislike, without realizing that they're doing that. It just feels like an objective picture. I can list dozens of examples of this, but I'll restrain myself for once and spare you.

Unfortunately, these extremely contradictory subjective images of HN seem to be a consequence of its structure, being non-siloed: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que.... This creates a paradox where precisely because the site is less divisive it feels more divisive—in the sense that it feels to people like it is dominated by their enemies, whoever their enemies may be. That's extremely bad for community, and I don't know what to do about it, other than post a version of this comment every time it comes up.

Thanks for caring about level-headedness, in any case.

replies(5): >>23807422 #>>23807590 #>>23808045 #>>23808650 #>>23809548 #
newacct583 ◴[] No.23807422[source]
The sensitivity with which you replied just tells me I'm probably right about this. I assure you, having dealt with HN readers in real-life contexts on both sides of that divide, that the perception absolutely isn't symmetric. HN is seen as a "safe space" for some demographics and definitely as hostile by others. (Edit: I'll just say it. I've had multiple conversations with real-life women where I have to make excuses for the perspective of posters here and explain why it's still a valuable forum anyway.)

I mean, I agree with you that we all have biases and blind spots in our perception. Which means... so do the mods. I comment because I want HN to continue to be a site that people like me want to comment on. The site that "people whose comments dang likes" want to comment on surely looks different.

replies(1): >>23807579 #
dang ◴[] No.23807579[source]
I totally empathize with what it's like to try to defend HN as a worthy place to participate when you're talking with someone who has extremely strong feelings about how awful it is, and the fact that you're willing to do that makes me feel much more sympathy and common ground with you than any disagreement we may have on other points.

But I think your explanation of why this is so is much too simplistic. The difference seems to be that you aren't being bombarded every day with utterly contradictory, extremely strong feelings about how awful it is. If you were, you wouldn't be able to write what you just posted. Your judgment that the perception "isn't symmetric" is wildly out of line with what I encounter here, so one of us must be dealing with an extremely skewed sample. Perhaps you read more HN posts and talk to a wider variety of people about HN than I do. From my perspective, the links below are typical—and there are countless more where these came from. Of course, there are also countless links claiming exactly the opposite, but since you already believe that, they aren't the medicine in this case. I sample that list when responding to commenters who see things this way:

https://news.ycombinator.com/item?id=23729568

https://news.ycombinator.com/item?id=17197581

https://news.ycombinator.com/item?id=23429442

https://news.ycombinator.com/item?id=20438487

https://news.ycombinator.com/item?id=15032682

https://news.ycombinator.com/item?id=19471335

https://news.ycombinator.com/item?id=15937781

https://news.ycombinator.com/item?id=21627676

https://news.ycombinator.com/item?id=15388778

https://news.ycombinator.com/item?id=20956287

https://news.ycombinator.com/item?id=15585780

https://news.ycombinator.com/user?id=BetterThanSlave

https://news.ycombinator.com/user?id=slamdance

https://news.ycombinator.com/item?id=15307915

A sample email, for a change of pace: "It's clear ycombinator is clearly culling right-wing opinions and thoughts. The only opinions allowed to remain on the site are left wing [...] What a fucking joke your site has become."

https://news.ycombinator.com/item?id=20202305

https://news.ycombinator.com/item?id=18664482

https://news.ycombinator.com/item?id=16397133

https://news.ycombinator.com/item?id=15546533

https://news.ycombinator.com/item?id=15752730

https://news.ycombinator.com/item?id=20645202

https://news.ycombinator.com/item?id=21325122

https://news.ycombinator.com/item?id=23719343

replies(6): >>23807853 #>>23807863 #>>23807944 #>>23808060 #>>23808168 #>>23809626 #
memexy ◴[] No.23807944[source]
I think there is a solution to this problem. If moderator decisions are made and recorded publicly, then the data can at least be analyzed objectively. If there is indeed a bias, then someone should be able to sit down, do the statistical analysis, and show that "Yes, X type of stories / comments are more consistently flagged / removed / downvoted / etc." or "No, there is actually no bias in this instance".

I think there is contention right now because moderator decisions are opaque, so people come up with their own narratives. Without actual data there is no way to tell what type of bias exists and why, so it's easy to make up a personal narrative that isn't backed by anything.

User flagging is also currently opaque, and a similar argument applies. If I have to provide a reason for flagging something, and know that my name will be publicly associated with the items I've flagged, then I will be much more careful. Right now, flagging anything is consequence-free because it is opaque.
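
As a rough sketch of the kind of analysis proposed here, a chi-square test of independence is one way to check whether removal rates vary by topic more than chance would explain. This assumes a hypothetical public moderation log recording a topic label and a removed-or-kept outcome per item; no such export exists today, and every name and number in the snippet is purely illustrative.

  # A toy check for topic-dependent removal rates over a hypothetical
  # public moderation log. All data and field names below are invented.
  from collections import Counter
  from scipy.stats import chi2_contingency

  # Hypothetical log entries: (topic label, was the item removed?).
  # A real export would have thousands of rows, not a toy list.
  log = [
      ("politics", True), ("politics", False), ("politics", True),
      ("startups", False), ("startups", False), ("startups", True),
      ("science", False), ("science", False), ("science", False),
  ]

  topics = sorted({topic for topic, _ in log})
  removed = Counter(topic for topic, gone in log if gone)
  kept = Counter(topic for topic, gone in log if not gone)

  # 2 x N contingency table: removed vs. kept counts for each topic.
  table = [
      [removed[t] for t in topics],
      [kept[t] for t in topics],
  ]

  # A small p-value says removal rates differ across topics more than
  # chance alone would explain. It says nothing about *why*: a topic can
  # be removed more often simply because it attracts more rule-breaking
  # posts, not because moderators are biased against it.
  chi2, p, dof, _ = chi2_contingency(table)
  print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")

Even a significant result would only show that rates differ; separating topic mix, actual guideline violations, and moderator behavior would take more careful modeling than this.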

replies(3): >>23808079 #>>23808089 #>>23808559 #
intended ◴[] No.23808559[source]
Absolutely not.

There are 2 mods running HN. Responding to people is TAXING - as in it's hugely costly. And it has some terrible edge cases which destroy the process:

The costly occasions are when you meet people who are:

a) Angry

b) Rule lawyers

c) Malignantly motivated

At this point their goal is to get attention or to apply coercive force to the moderation process.

These guys are an existential threat to the conversational process, and one of the win conditions is to get people to turn against the moderators.

Social media is a topic that HN gets wrong so regularly, and so often without recourse to research or analysis, that I would avoid discussing moderation in general here.

The fact is that if people are arguing in good faith, we can have some amount of peace, and even deal with inadvertent faux pas and ignorance, provided you never reach an Eternal September scenario.

But bad faith actors make even this scenario impossible.

replies(1): >>23808613 #
dang ◴[] No.23808613[source]
If you know of research or analysis that is essential on this topic, please tell us what it is. I'd like to be sure I'm aware of it, and other readers would surely be interested also.
replies(2): >>23809030 #>>23809033 #
intended ◴[] No.23809030[source]
Hmm. Given the broad range of topics "social media" covers, there are vast numbers of papers on it.

For people who have NEVER thought about social networks and conversations online, I find this site discusses some of the blander but more game-theoretic elements of networks/trust, and therefore of online conversations:

https://ncase.me/crowds/

https://ncase.me/trust/

-----------------

For you guys (HN mods), I'd bet that you in particular are abreast of this stuff.

- I'd ask if you have heard of/seen Civil Servant, by Nathan Matias - it's a system for running experiments on forums and testing the results (seeing if there is a measurable change in user behavior)

https://natematias.com/ - Civil Servant; professor at Cornell. He probably has an account here

https://civilservant.io/moderation_experiment_r_science_rule...

- Books: Custodians of the Internet.

------

Going through some of the papers I have stocked away, sadly in no sane order. I can't say if they are classic papers; you may have better.

- Policy/law paper: Georgetown Law, Regulating Online Content Moderation. https://www.law.georgetown.edu/georgetown-law-journal/wp-con...

- NBER paper on polarization - https://www.nber.org/papers/w23258. I disagreed with/was surprised by the conclusion. America-centric.

- Homophily and minority-group size explain perception biases in social networks, https://www.nature.com/articles/s41562-019-0677-4

- The spreading of misinformation online: https://www.pnas.org/content/113/3/554.full

- The University of Alabama has a Reddit research group - https://arrg.ua.edu/research.html - with two papers, one of which explores the effect of a sudden influx of new users on r/2xchromosomes. https://firstmonday.org/ojs/index.php/fm/article/view/10143/...

- Policy: OFCOM (UK) has a policy paper on using AI for moderation: https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/...

- Algorithmic content moderation: Technical and political challenges in the automation of platform governance - https://journals.sagepub.com/doi/10.1177/2053951719897945

- The Web Centipede: Understanding How Web Communities Influence Each Other Through the Lens of Mainstream and Alternative News Sources

- Community Interaction and Conflict on the Web

- You Can’t Stay Here: The Efficacy of Reddit’s 2015 Ban Examined Through Hate Speech

Papers I have yet to read myself:

- Does Transparency in Moderation Really Matter?: User Behavior After Content Removal Explanations on Reddit. https://shagunjhaver.com/files/research/jhaver-2019-transpar...

- Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms: https://journals.sagepub.com/doi/abs/10.1177/146144481877305... (I need to read that paper, but I expect it to be a good foundation of knowledge and examples)

Other stuff:

- The Turing Institute talked about moderators being key workers during COVID - https://www.turing.ac.uk/blog/why-content-moderators-should-...

replies(1): >>23809962 #
1. holler ◴[] No.23809962[source]
Hey really big thank you for posting this! I'm working on a new discussion site and so much of this is pertinent. I may return with some follow-up commentary once I've read through some of it, thank you.
replies(1): >>23815327 #
2. intended ◴[] No.23815327[source]
NP. If you find any interesting papers, do share.