600 points codetrotter | 2 comments
subsubzero ◴[] No.35461974[source]
Congrats, dang, you have done a wonderful job so far and moderate one of the most fantastic online communities out there. I am sure most of the job feels somewhat thankless, but I want to let you know that I (and many, many other users on this site) appreciate your hard work and dedication.
replies(3): >>35462601 #>>35462773 #>>35463700 #
codeddesign ◴[] No.35462773[source]
If by “finest” you mean a Reddit mob mentality for tech, then yes I completely agree with this statement.
replies(6): >>35462836 #>>35463131 #>>35463193 #>>35463875 #>>35464427 #>>35464999 #
dang ◴[] No.35463131[source]
What do you think we could do differently? Serious question.

I don't like the mob thing either but it's how large group dynamics on the internet work (by default). We try to mitigate it where we can but there's not a lot of knowledge about how to do that.

replies(24): >>35463179 #>>35463213 #>>35463257 #>>35463371 #>>35463548 #>>35463713 #>>35463749 #>>35464099 #>>35464410 #>>35464467 #>>35464570 #>>35464688 #>>35464754 #>>35465446 #>>35465523 #>>35465648 #>>35465794 #>>35466615 #>>35466946 #>>35467134 #>>35468675 #>>35469283 #>>35476621 #>>35488228 #
eru ◴[] No.35463548[source]
Have you run some experiments with giving different people different front pages?

To explain a bit more:

On the one hand, you need a critical mass of people to have a discussion. On the other hand, large group dynamics seem to be a problem.

HN is generally many multiples larger than the critical mass for the former, at least on the front page. Attention drops off a lot if you go further.

So as an experiment, you could do something like 'rendezvous hashing' to show each user a random 10% subset of submissions. (If you want to gradually introduce it, run the experiment on 20% of the users only, but show them a 50% subset, so that each of the longer tail items still gets 10% of total users? You can play with the numbers.)

You could make this opt-out, too, so that people don't create ten accounts in the hope of seeing everything. Direct links would also still work.
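The per-user subset idea can be sketched roughly as follows (a minimal illustration of the suggestion, not anything HN actually runs; the function and parameter names are made up). Hashing each (user, submission) pair and keeping the pairs whose hash falls below a cutoff gives every user a stable, deterministic ~10% slice, in the spirit of rendezvous hashing:

```python
import hashlib

def visible_subset(user_id: str, submission_ids: list[str],
                   fraction: float = 0.1) -> list[str]:
    """Deterministically pick a per-user fraction of submissions.

    Each (user, submission) pair is hashed; a submission is visible to
    a user when its hash maps below the cutoff. The same user always
    sees the same subset, while different users see different ones.
    """
    visible = []
    for sid in submission_ids:
        h = hashlib.sha256(f"{user_id}:{sid}".encode()).digest()
        # Map the first 8 bytes of the hash to a float in [0, 1).
        score = int.from_bytes(h[:8], "big") / 2**64
        if score < fraction:
            visible.append(sid)
    return visible
```

Because the subset depends only on the hash, a user sees the same slice on every reload without the server storing any per-user state, and direct links to hidden submissions still work.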

replies(1): >>35463922 #
dang ◴[] No.35463922[source]
We ran an experiment something like that a few years ago, but mostly people got pissed off that they were seeing random stuff on the front page.

What's not clear to me in your comment is what we would be testing for. If you're going to A/B test different front pages, what's the fitness function?

replies(1): >>35464026 #
eru ◴[] No.35464026[source]
Thanks for already having run the experiment!

In vague terms, the idea is to suffer less from 'how large group dynamics on the internet work (by default)' while still having enough eyeballs per front-page submission to have a discussion.

Now, how would we operationalise that? A simple measurement would be to check whether engagement per submission develops a longer, fatter tail. But that would be merely something that's easy to measure, not something we directly care about.

You'd need to have some proxies for drawbacks of 'large group dynamics on the internet'. Perhaps check civility of discussion or so?
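One cheap proxy for the "fatter tail" part of this is concentration of engagement: the share of all votes (or comments) captured by the top decile of submissions. The metric choice here is my own illustration, not anything HN is known to measure:

```python
def top_decile_share(votes: list[int]) -> float:
    """Fraction of all votes captured by the top 10% of submissions.

    A falling value over the course of an experiment would suggest
    attention is spreading out into the longer tail.
    """
    ordered = sorted(votes, reverse=True)
    k = max(1, len(ordered) // 10)  # size of the top decile
    total = sum(ordered)
    return sum(ordered[:k]) / total if total else 0.0
```

Civility would be much harder to quantify; flag rates or moderator interventions per thread might serve as rough stand-ins.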

> [...] but mostly people got pissed off that they were seeing random stuff on the front page.

I guess if you wanted to check again, you'd either have to educate people better (i.e. better PR) or you'd have to be sneakier.

An idea for the latter: instead of restricting users to 10% of submissions, as a test run you can reduce them to 80% of submissions. That way the front-page would still look pretty similar to before and you wouldn't drive people too far into the long tail of submissions. Of course, any effect you could measure would also be weaker.

What did you measure (or hope to measure) when you ran this experiment a few years ago?

replies(1): >>35464105 #
dang ◴[] No.35464105[source]
We just did what Jerry Weinberg called "first order measurement" - looking at what was happening. It wasn't a borderline call; people hated it.

I wrote about this here: https://news.ycombinator.com/item?id=21868928 (Dec 2019)

replies(1): >>35464493 #
eslaught ◴[] No.35464493[source]
I read through your linked post, and I wonder if it would work better if the algorithm was something like this:

Take what are currently the first two front pages (i.e., the current front page, plus what you get when you click "next" from it). Then randomize within that set and show the result to each user.

You could do that for any number of pages N, perhaps even a fractional N (1.2, 1.5, etc.), to see how much of an impact it has.
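A sketch of that scheme (the names are assumptions; HN's real front page is 30 stories, but nothing else here reflects its actual implementation). Seeding the shuffle per user keeps each user's randomized page stable across reloads instead of being fresh noise every time:

```python
import hashlib
import random

def personalised_front_page(user_id: str, ranked_ids: list[str],
                            pages: float = 2.0, page_size: int = 30) -> list[str]:
    """Sample one front page from the top `pages` worth of ranked stories.

    Only stories that already rank in the top N pages are eligible, so
    nothing unvetted from /newest can appear; `pages` may be fractional
    (e.g. 1.2 or 1.5).
    """
    pool = ranked_ids[: int(pages * page_size)]
    # Derive a stable per-user seed so reloads show the same page.
    seed = int.from_bytes(hashlib.sha256(user_id.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return rng.sample(pool, min(page_size, len(pool)))
```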

Instead it sounds like you took /newest posts and randomly placed them on the front page. Those may be completely or nearly completely unvetted, so it's not surprising to me that people reacted the way they did. (Granted, this is with the benefit of hindsight and so on.)

Stepping back a bit, I'm not sure any of this will meaningfully change the "mob" dynamics of HN. But HN attention is so focused right now that I do think spreading it out might have an impact. Right now, posts tend to die off quickly, and sometimes I wish discussions would live on a little longer than they do.

I definitely empathize with feeling that any change could make things dramatically worse.

replies(1): >>35465046 #
eru ◴[] No.35465046[source]
> Take what is currently the two front pages (i.e., the current front page, and what you get to when you click "next" from the front page). Then randomize out of that set and show it to the user.

Well, that's essentially identical to what I suggested if you specialise it to 50% of submissions visible per user.

But yes, I agree that this would be an interesting experiment.

However, it's easy enough for us to suggest experiments, and much harder for dang and friends to run them.