
270 points ilamont | 1 comment | source
harrisonjackson ◴[] No.21974392[source]
There are plenty of communities that mitigate this problem through earned privileges. Real users who are participating in the community are able to do more than someone that just signed up with a throwaway address. Stackoverflow seems like an okay model... recent moderator issues aside.

Also, the ability to flag an author or book for extra moderation seems like a no-brainer. Once there is evidence of harassment, all user content about that author or book needs to be approved before it is made public. Enable trusted moderators from the community to help with this if paid moderators cannot keep up.

This seems like it could get so much worse than it currently is. The target of this harassment seems to be taking it well, but what happens on a platform like this to someone who isn't as prepared to deal with it?

replies(3): >>21975018 #>>21977718 #>>21979280 #
crazygringo ◴[] No.21975018[source]
I think there's a big difference between moderation according to standards, and false reviews.

It's comparatively easy to determine if any single post is using banned language, is abusive, etc. The single post can then be removed.

False reviews, on the other hand, are virtually impossible to identify individually. People's opinions on a book legitimately differ. There isn't an obvious way to distinguish between a review that's part of a harassment campaign or paid brigade, versus one that's genuine. It's only in aggregate that something seems to be wrong -- but how do you fix it? How do you select which individual reviews get removed?

Moderation is not really a solution here, because each individual review, taken on its own, would be approved.

replies(1): >>21975293 #
harrisonjackson ◴[] No.21975293[source]
One of the fake reviews was posted under the identity of someone who had passed away, with a profile picture taken from their obituary. A moderator who knows an author or book has been flagged could spend a minute to find this out.

It is definitely a difficult problem - I'll agree with you there. There are some other good suggestions in the thread on making it easier to flag false reviews and to moderate reviews beyond "community standards".

I like the idea of using a captcha that prompts you to enter a random word from a random chapter in the book.
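A rough sketch of how that check could work, assuming the site already has the book text on hand (the chapter splitting and word filtering here are made up for illustration):

    import random
    import re

    def make_book_captcha(chapters):
        # `chapters` is assumed to be a list of chapter texts the site already has.
        # Returns the prompt to show the reviewer and the expected answer.
        chapter_idx = random.randrange(len(chapters))
        # Only use reasonably long words so the answer isn't "the" or "and".
        words = [w for w in re.findall(r"[A-Za-z]+", chapters[chapter_idx]) if len(w) >= 6]
        position = random.randrange(len(words))
        prompt = (f"Enter word number {position + 1} (counting only words of 6+ letters) "
                  f"from chapter {chapter_idx + 1}.")
        return prompt, words[position].lower()

    def check_answer(expected, submitted):
        return submitted.strip().lower() == expected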

Another system could just hide reviews that are not verified, and tie into Amazon purchases to verify them. I don't know why Amazon would not lean on the fact that they own Goodreads to do this... Make all the reviews visible if the user asks to see the unverified ones, but by default only show reviews from people who bought the book through Amazon.
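As a toy illustration of that default-to-verified idea; the review fields and the purchase-lookup function here are hypothetical, not Goodreads' or Amazon's actual API:

    def visible_reviews(reviews, has_purchased, show_unverified=False):
        # Show verified-purchase reviews by default; show everything
        # only when the reader explicitly asks for the unverified ones.
        if show_unverified:
            return reviews
        return [r for r in reviews if has_purchased(r["user_id"], r["book_id"])]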

replies(3): >>21975435 #>>21976769 #>>21979336 #
crazygringo ◴[] No.21975435[source]
Yes, I agree there are a few things that could be done to improve this, but they all basically involve assigning semi-subjective 'weights' to the reliability of individual reviewers.

E.g. a review is more likely to be genuine if the book was purchased, if it isn't a prepublication review (though some people really do receive and review books in advance), if the reviewer has many other reviews, if their ratings follow common statistical patterns both per-author and per-book, and so on.
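Very roughly, those signals could be folded into a single per-review weight, something like the sketch below (every field name, threshold, and weight is a placeholder):

    def reviewer_weight(reviewer, review):
        # Heuristic trust score in [0, 1]; all signals and weights are placeholders.
        score = 0.0
        if review.get("verified_purchase"):
            score += 0.4
        if not review.get("prepublication"):        # pre-release reviews are more suspect...
            score += 0.2
        elif reviewer.get("known_advance_reader"):  # ...unless this reviewer usually gets advance copies
            score += 0.2
        if reviewer.get("review_count", 0) >= 10:   # an established reviewing history
            score += 0.2
        if reviewer.get("rating_pattern_typical"):  # ratings roughly follow per-book/per-author norms
            score += 0.2
        return min(score, 1.0)

    def weighted_average_rating(reviews, reviewers):
        total = weight_sum = 0.0
        for r in reviews:
            w = reviewer_weight(reviewers[r["user_id"]], r)
            total += w * r["rating"]
            weight_sum += w
        return total / weight_sum if weight_sum else None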

The trouble with all of this is just that it's really, really hard to get right. There's a tremendous amount of 'tuning' involved.

It's probably not possible, but it really would be great if someone could come up with a general, elegant theory that solves the 'does this reviewer seem statistically trustworthy?' problem in a way that effectively identifies brigading and harassment, while still allowing for genuine 'oddballs' whose reviews and ratings go against the crowd.

replies(2): >>21976492 #>>21976623 #
1. strgcmc ◴[] No.21976623[source]
I swear I'm not just meme-ing for the sake of it, but this has always seemed like fundamentally a decentralized trust problem, and one that can potentially be solved by some form of social blockchain.

Basically, instead of an economic currency unit being mined, the value being protected is some form of reputational trust token; 30 seconds of Googling leads to articles like this: https://www.forbes.com/sites/shermanlee/2018/08/13/a-decentr...

Thinking about things this way essentially boils the fundamental problem down to what IMO is a pretty "general elegant theory": construct a properly balanced incentive structure that asymmetrically disincentivizes "bad" behavior while encouraging "good" behavior, in much the same way that Bitcoin's core ledger-validation/mining abstraction rewards miners for securing the network while making attack scenarios prohibitively expensive.
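As a toy illustration of that kind of asymmetric incentive (the staking mechanics below are entirely invented, not any real chain's protocol): reviewers lock up a bit of reputation on each review, honest reviews earn a little back, and confirmed-bad ones lose several times more.

    STAKE = 1.0           # reputation locked up to post a review
    HONEST_REWARD = 0.1   # small gain when a staked review survives scrutiny
    SLASH_MULTIPLIER = 5  # confirmed-bad reviews lose several times the possible gain

    class Reviewer:
        def __init__(self, reputation=10.0):
            self.reputation = reputation

        def can_post(self):
            return self.reputation >= STAKE

        def settle(self, review_confirmed_bad):
            if review_confirmed_bad:
                self.reputation -= STAKE * SLASH_MULTIPLIER
            else:
                self.reputation += HONEST_REWARD

Under numbers like those, a brigading account burns through its reputation quickly, while an ordinary reviewer slowly accumulates it.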

I'm not saying it's easy or obvious, but I think this is exactly the sort of decentralized trust problem that blockchains are well-suited for.