I don't like the mob thing either but it's how large group dynamics on the internet work (by default). We try to mitigate it where we can but there's not a lot of knowledge about how to do that.
Are there people whose upvotes count for more than others? Or are these conversations actively suppressed? Either way, it makes it hard to have important/robust conversations when who gets to see them is being suppressed.
Re the second bit: there aren't any accounts whose upvotes count for more, but if accounts upvote too many bad* comments and/or get involved in voting rings, we sometimes make their votes not count anymore.
* By "bad" I mean bad relative to HN's intended purpose as defined here: https://news.ycombinator.com/newsguidelines.html. Relative to that, "bad" means snark, flamewar, ideological battle, etc. — all the things that zap intellectual curiosity.
In terms of moderator action: we might downweight ChatGPT topics (for or against) if they seem repetitive rather than adding significant new information (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...). But we don't downweight posts that are critical of YC companies; if anything, we downweight those less than we would similar threads on other topics. See https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu....
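For a sense of what "downweighting" means mechanically, here is a hypothetical sketch. The (points - 1) / (age + 2)^gravity formula is the commonly cited approximation of HN-style ranking; the penalty value, topic list, and helper below are invented for illustration and are not HN's actual code.

    # Hypothetical sketch of "downweighting" a repetitive topic -- not HN's
    # actual ranking code; the penalty, topic list, and threshold are made up.
    import math

    def rank_score(points: int, age_hours: float, penalty: float = 1.0,
                   gravity: float = 1.8) -> float:
        """HN-style decay: points fade with age; a penalty in (0, 1] pushes a
        story down the front page without killing it."""
        return penalty * (points - 1) / math.pow(age_hours + 2, gravity)

    def followup_penalty(title: str, recent_titles: list[str],
                         hot_topics: tuple = ("chatgpt", "gpt-4", "llm"),
                         max_recent: int = 3) -> float:
        """Downweight a story if several recent stories already covered the
        same saturated topic; otherwise leave it alone."""
        if not any(t in title.lower() for t in hot_topics):
            return 1.0
        similar = sum(any(t in r.lower() for t in hot_topics) for r in recent_titles)
        return 0.3 if similar >= max_recent else 1.0

    # A fourth ChatGPT follow-up in a day gets pushed down; a fresh topic doesn't.
    recent = ["ChatGPT plugins announced", "GPT-4 released",
              "My thoughts on LLMs", "Show HN: a weather app"]
    penalty = followup_penalty("Another ChatGPT opinion piece", recent)
    print(rank_score(points=120, age_hours=1.0, penalty=penalty))

A penalty below 1 just moves a story down the list rather than removing it, which is the difference between "downweighted" and "killed".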
Are you sure there aren't abuses by your portfolio companies' managers/employees flagging negative stories? I imagine Sam, for example, knows exactly what he has to do to get ChatGPT criticism guided off the stage.
Edit: for example, do you know what happened with this story? https://news.ycombinator.com/item?id=35245626
This is a very interesting/important topic, and it was a new one. It was really hot in the first hour, and then just got smashed off the front page.
Quite sure. That is, there may be managers/employees of $companies trying to flag things, but being a YC portfolio company doesn't make that any easier. And yes, I'm sure that Sam can't do that. (I also know that he wouldn't try, but that's a separate point.)
Re the FAQ: it doesn't give a detailed explanation (we can't do that without publishing our code), but it summarizes the factors comprehensively. If you want to know more, I'd need to see a specific link. Speaking of which:
Re https://news.ycombinator.com/item?id=35245626: it was on HN's front page for 4 hours, and at some point was downweighted by a mod. I haven't checked why, but most likely it was just our general approach of downweighting opinion pieces on popular topics. Keep in mind that the LLM tsunami is an insanely popular topic (by far the biggest in years), and if we weren't downweighting follow-ups a la https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que..., it would saturate the front page every day.
Actually we tend not to do that moderation on randomwalker posts (https://news.ycombinator.com/user?id=randomwalker), because they're basically always excellent. But a certain amount of randomness is inescapable, and randomwalker posts do great on HN most of the time. If we made the wrong call in this case, so much the worse for us, and I'm genuinely sorry.
Why precisely would publishing (the relevant part of) the code be a problem? Twitter did it just a few days ago, and they aren't even known as an information hub of the open-source world; plus they face a lot more public scrutiny for everything they do, to put it mildly.
Either way, though, I don't want to publish that part of our code for two reasons: I fear that it would make HN easier to game/manipulate, and I fear that it would increase the number of objections we have to deal with. It's not that I mind dealing with objections in principle, but a 10x increase would bury me.