
414 points | muchtest | 1 comment
nkurz ◴[] No.35929865[source]
Vouched for and upvoted because I think it's important for readers here to see how much effort goes into creating posts that game the system. I think it's better for these strategies to be known than hidden. It will be interesting to see how tactics like this one evolve as ChatGPT use becomes more widespread.

There's a definite tension between the rule against accusing other users of being shills and the reality that there are quite a few shills out there. I think it's still a good rule, but not because it's never right. Rather, the rule is good because false accusations do more harm than letting some shilling slip by.

replies(7): >>35930145 #>>35930992 #>>35932488 #>>35933481 #>>35934251 #>>35934959 #>>35935998 #
ouid ◴[] No.35932488[source]
The rule against calling other people shills is the worst part of Hacker News. Skepticism is important, and important to share. I have never been anything but grateful to read a comment pointing out that another comment was obviously a shill. Perhaps I have been embarrassed for not seeing the obvious truth, but always grateful.
replies(5): >>35932531 #>>35932559 #>>35932747 #>>35932851 #>>35934150 #
dang ◴[] No.35932851[source]
There's no such thing as "obviously a shill"—I can tell you from 10+ years of experience that the vast majority of such accusations crumble instantly on investigation. Commenters are far too quick to hurl them at other commenters.

There seems to be a cognitive bias where one's feeling of good faith decreases as the distance between someone else's opinion and one's own increases [1]. If so, then everyone has a "shill threshold": an amount of difference-of-opinion past which you will feel like the other person can't possibly be speaking honestly. When someone's posts exceed my shill threshold, I will feel that there must be some sinister reason why they're posting like that (they're a shill, they're an astroturfer, they're a foreign psy-op, you name it).

The important thing to realize is that this "shill threshold" is relative to the perceiver. It's the limit of your comfort zone, not an objective property of someone else's posts—no matter how objective the perception feels. It always feels objective—that's how we get phrases like "obviously a shill".

A forum like HN includes so many people, with such different views and backgrounds, that there is a constant stream of posts triggering somebody-or-other's "shill threshold", purely because their views are sufficiently different. Thus the threads are guaranteed to fill up with accusations of abuse, even in the absence of any actual abuse.

[1] I bet it's nonlinear. Quadratic feels about right.
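The footnote's quadratic guess can be turned into a toy model. This is only a sketch of the idea as stated (all names and numbers here are illustrative assumptions, not anything measured or used by HN):

```python
# Toy model of the "shill threshold" described above.
# All parameters are hypothetical, chosen only to illustrate the shape.

def suspicion(opinion_distance: float) -> float:
    """Felt bad faith as a function of difference of opinion.
    The footnote guesses the growth is nonlinear, roughly quadratic."""
    return opinion_distance ** 2

def exceeds_shill_threshold(opinion_distance: float,
                            threshold: float = 4.0) -> bool:
    """True when the perceiver will *feel* that the other poster
    can't possibly be arguing honestly. Note the threshold belongs
    to the perceiver, not to the post being judged."""
    return suspicion(opinion_distance) > threshold
```

The quadratic makes small disagreements feel negligible while large ones blow past the threshold quickly, which matches the described experience: `exceeds_shill_threshold(1.5)` is false, but `exceeds_shill_threshold(3.0)` is true for the same perceiver.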

---

But real manipulation and abuse also objectively exist, so there are two distinct phenomena: there's Phenomenon A, the cognitive bias I just described, and then there's Phenomenon B: actual abuse, real shillage, astroturfing, etc. These are completely different from each other, despite how similar they feel. (The fact that they feel so similar is the cognitive bias.)

Phenomenon A generates overwhelmingly more comments than Phenomenon B—way more than 99%—and those comments are poison. They turn into flamewars, evoking worse from others (who feel unjustly accused and therefore within their rights to strike back even harder), and destroy everything we're trying for in the community.

What's the solution? We can't allow Phenomenon A (imaginary perceptions of abuse) to destroy HN, and we also can't allow Phenomenon B (actual abuse, perceived or not) to destroy HN.

Our solution is to forbid users to accuse each other in the threads (because we know that such accusations are usually false and poison the forum), but to welcome reports of possible abuse through a different channel (hn@ycombinator.com). This takes care of both Phenomenon A (you can't post like that here!) and Phenomenon B (we investigate such reports and crack down on real abuse when we find it).

To fight actual abuse (Phenomenon B), we need evidence—something objective to go on (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... ). It can't just be the feeling of "obviously a shill", which we know to be unreliable. And it can't just be people having vastly different views. Someone having a different opinion is not evidence of abuse, it's just evidence that the forum is big and diverse enough to include a wide range of opinions.

We need to find some trace of evidence in data that we can look at. Some data is public (e.g. comment histories), other data is not (e.g. voting histories and site access patterns). We have a lot of experience doing this and we're happy to look when people email us with their suspicions—partly because fighting abuse is one of our most important functions as site managers, and partly because we owe it to users in exchange for (hopefully) not slinging such accusations in the threads.

---

(There's also the question: what about real abuse that we can't find traces of in the data? Obviously there must be some of that and we don't know how much. I call this the Sufficiently Smart Manipulator problem. I've written about that in various places - e.g. https://news.ycombinator.com/item?id=27398725, and more via https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que..., if anybody wants it.)

replies(2): >>35935736 #>>35950459 #
youainti ◴[] No.35935736[source]
Thank you for that in-depth reply. I learned something new by reading it. I guess I had never considered the community-and-norms aspect of reducing false positives in abuse detection.
replies(1): >>35936154 #
Paul-Craft ◴[] No.35936154[source]
Yeah, that was a really interesting comment. I think it would be kinda cool if dang or someone expanded it into more of a blog post on how HN is moderated, or maybe even best practices for community moderation in general.