270 points ilamont | 14 comments
1. harrisonjackson ◴[] No.21974392[source]
There are plenty of communities that mitigate this problem through earned privileges. Real users who participate in the community can do more than someone who just signed up with a throwaway address. Stack Overflow seems like an okay model... recent moderator issues aside.
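The earned-privilege idea can be sketched very simply: privileges unlock as a user's reputation crosses tiers. The tier names and thresholds below are invented for illustration, not any real site's actual numbers.

```python
# Hypothetical privilege tiers keyed by minimum reputation.
# The thresholds are illustrative only, not Stack Overflow's real values.
PRIVILEGES = {
    0: "post review",
    50: "flag content",
    500: "edit others' reviews",
    2000: "moderate flagged items",
}

def privileges_for(reputation: int) -> list[str]:
    """Return every privilege a user has earned at this reputation level."""
    return [name for threshold, name in sorted(PRIVILEGES.items())
            if reputation >= threshold]
```

A throwaway account starts with only the base tier, so brigading accounts created yesterday simply can't flag, edit, or moderate.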

Also, the ability to flag an author or book for extra moderation seems like a no-brainer. Once there is evidence of harassment, all user content should require approval before it is made public. Enable trusted moderators from the community to help with this if paid moderators cannot keep up.

This seems like it could get so much worse than it currently is. The target of this harassment seems to be taking it well, but what happens on a platform like this to someone who isn't as prepared to deal with it?

replies(3): >>21975018 #>>21977718 #>>21979280 #
2. crazygringo ◴[] No.21975018[source]
I think there's a big difference between moderation according to standards, and false reviews.

It's comparatively easy to determine if any single post is using banned language, is abusive, etc. The single post can then be removed.

False reviews, on the other hand, are virtually impossible to identify individually. People's opinions on a book legitimately differ. There isn't an obvious way to distinguish a review that's part of a harassment campaign or paid brigade from one that's genuine. It's only in aggregate that something seems wrong -- but how do you fix it? How do you select which individual reviews get removed?

Moderation is not really a solution here, because all individual reviews will be approved.

replies(1): >>21975293 #
3. harrisonjackson ◴[] No.21975293[source]
One of the fake reviews was posted under the name of someone who had passed away, with a picture obtained from their obituary. A moderator who knows an author or book has been flagged could spend a minute to find this out.

It is definitely a difficult problem - I'll agree with you there. There are some other good suggestions in the thread on making it easier to flag false reviews and to moderate reviews beyond "community standards".

I like the idea of using a captcha that prompts you to enter a random word from a random chapter in the book.
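That captcha could be a few lines of code - pick a random word from a random chapter and ask for it. This is a toy sketch under obvious assumptions (one canonical edition, chapter text available as plain strings); a real system would need edition handling and stop-word filtering.

```python
import random

def make_book_challenge(chapters: dict[int, str],
                        rng: random.Random) -> tuple[str, str]:
    """Pick a random word from a random chapter; return (prompt, answer).

    `chapters` maps chapter number -> chapter text. Toy sketch only.
    """
    chapter = rng.choice(sorted(chapters))
    words = chapters[chapter].split()
    index = rng.randrange(len(words))
    prompt = f"Enter word {index + 1} of chapter {chapter}"
    return prompt, words[index]

def check_answer(expected: str, given: str) -> bool:
    """Case- and whitespace-insensitive comparison of the user's answer."""
    return expected.strip().lower() == given.strip().lower()
```

The obvious weakness is that anyone with a pirated copy of the text passes, but it does raise the per-review cost for a drive-by brigade.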

Another system could just hide reviews that are not verified, and tie into Amazon purchases to verify them. I don't know why Amazon wouldn't lean on the fact that they own Goodreads to do this. Make all the reviews visible if the user asks to see the unverified ones, but by default show only reviews from people who bought the book through Amazon.
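The default-to-verified view is essentially one filter. A minimal sketch, assuming each review carries a `verified_purchase` flag (hypothetically populated from Amazon order data):

```python
from dataclasses import dataclass

@dataclass
class Review:
    author: str
    text: str
    verified_purchase: bool  # hypothetically tied to an Amazon order

def visible_reviews(reviews: list[Review],
                    show_unverified: bool = False) -> list[Review]:
    """Default view shows verified purchases only; the full list is opt-in."""
    if show_unverified:
        return reviews
    return [r for r in reviews if r.verified_purchase]
```

Nothing is deleted, so legitimate non-purchase reviews survive - they just stop being the first thing a prospective reader sees.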

replies(3): >>21975435 #>>21976769 #>>21979336 #
4. crazygringo ◴[] No.21975435{3}[source]
Yes, I agree there are a few things that could be done to improve this, but they all basically involve giving semi-subjective 'weights' to the reliability of individual reviewers.

E.g. a review is more likely to be genuine if the book was purchased, if the review isn't pre-publication (except some people really do receive and review books in advance), if the reviewer has many other reviews, if their reviews follow common statistical patterns both per-author and per-book, and so on.

The trouble with all of this is just that it's really, really hard to get right. There's a tremendous amount of 'tuning' involved.

It's probably not possible, but it really would be great if someone could come up with a general, elegant theory for the 'does this reviewer seem statistically trustworthy' problem - one that effectively identifies brigading and harassment while still allowing for genuine 'oddballs' whose reviews and ratings go against the crowd.
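To make the 'tremendous amount of tuning' concrete, here is what one such weighting might look like. The signals mirror the ones listed above; the weights and caps are made up for illustration and are exactly the part that is hard to get right.

```python
def trust_score(verified_purchase: bool,
                prior_reviews: int,
                account_age_days: int,
                reviewed_before_publication: bool) -> float:
    """Combine a few reviewer signals into a 0..1 trust weight.

    All weights below are illustrative guesses, not tuned values.
    """
    score = 0.0
    score += 0.4 if verified_purchase else 0.0
    score += min(prior_reviews, 20) / 20 * 0.3        # history caps at 20 reviews
    score += min(account_age_days, 365) / 365 * 0.2   # age caps at one year
    score -= 0.2 if reviewed_before_publication else 0.0
    return max(0.0, min(1.0, score))
```

The 'oddball' problem is visible immediately: a long-time genuine reviewer who happens to receive advance copies gets penalized by the pre-publication term, which is why any fixed weighting like this will misfire somewhere.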

replies(2): >>21976492 #>>21976623 #
5. lazyasciiart ◴[] No.21976492{4}[source]
In this specific case, the book isn't even out for advance readers. The very least Goodreads could do is turn off reviews for the book until the author tells them otherwise.
6. strgcmc ◴[] No.21976623{4}[source]
I swear I'm not just meme-ing for the sake of it, but this has always seemed like fundamentally a decentralized-trust problem, one that could potentially be solved by some form of social blockchain.

Basically, instead of an economic currency unit being mined, the value being protected is instead some form of reputational trust-token; 30 seconds of Googling leads to articles like this: https://www.forbes.com/sites/shermanlee/2018/08/13/a-decentr...

Thinking about things this way boils the fundamental problem down to what is, IMO, a pretty "general elegant theory": construct a properly balanced incentive structure that asymmetrically disincentivizes "bad" behavior while encouraging "good" behavior, in much the same way that Bitcoin's core ledger-validation/mining abstraction rewards miners for securing the network while making attack scenarios prohibitively expensive.

I'm not saying it's easy or obvious, but I think this is exactly the sort of decentralized trust problem that blockchains are well-suited for.
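Stripped of the blockchain machinery, the incentive structure being described is stake-and-slash: a reviewer stakes reputation tokens on each review, confirmed-bad reviews burn the stake, and legitimate ones earn it back plus a small reward. The ledger below is an entirely hypothetical toy, just to show the asymmetry.

```python
class ReputationLedger:
    """Toy stake-and-slash reputation ledger. Hypothetical mechanics only."""

    def __init__(self) -> None:
        self.balances: dict[str, int] = {}
        self.stakes: dict[str, int] = {}   # review_id -> staked amount

    def stake(self, user: str, review_id: str, amount: int) -> None:
        """Lock up reputation against a review; new accounts can't afford much."""
        if self.balances.get(user, 0) < amount:
            raise ValueError("insufficient reputation to stake")
        self.balances[user] -= amount
        self.stakes[review_id] = amount

    def resolve(self, user: str, review_id: str,
                legitimate: bool, reward: int = 1) -> None:
        """Return stake plus a small reward if legitimate; burn it otherwise."""
        amount = self.stakes.pop(review_id)
        if legitimate:
            self.balances[user] = self.balances.get(user, 0) + amount + reward
        # An illegitimate review burns the whole stake, so bad behavior
        # costs more than good behavior earns.
```

The asymmetry is the point: losing a 5-token stake on one bad review wipes out the reward from five good ones, which is exactly the economics that makes brigading expensive.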

7. ObsoleteNerd ◴[] No.21976769{3}[source]
> I like the idea of using a captcha that prompts you to enter a random word from a random chapter in the book.

1990s DOS games were ahead of their time, I guess.

8. joe_the_user ◴[] No.21977718[source]
> There are plenty of communities that mitigate this problem through earned privileges. Real users who participate in the community can do more than someone who just signed up with a throwaway address. Stack Overflow seems like an okay model... recent moderator issues aside.

I'm scanning my memory banks, and Stack Overflow is the only "earned privilege" community that comes to mind, and my experience with it has been uniformly unpleasant - let's say "bordering on toxic". If anything, automatically earned privilege creates competition, which makes everything worse and nastier.

In contrast, I moderate a medium-sized FB group on a topic that often attracts trolling. We eliminate it entirely through hand-picked moderators and a zero-tolerance statement. There's no competition to be a moderator, and there's actually little for the moderators to do, since making expectations clear mostly works. So there's no competition for anything, and people spend their time discussing issues instead.

HN seems to be closer to that situation also - with karma hidden, competition is pretty limited. And anonymous posters can make fine contributions here.

replies(3): >>21978544 #>>21978644 #>>21980938 #
9. probably_wrong ◴[] No.21978544[source]
> I'm scanning my memory banks and Stackoverflow is the only "earned privilege" community that comes to mind

As far as positive "earned privilege" examples go, some come to mind:

  * HN, where downvoting requires a certain amount of karma (although there's plenty of human moderation too)

  * MetaFilter [1], which has a reputation of good content due to their one-time $5 charge for signing up.

  * The /r/AskHistorians subreddit, where you only get to answer once you have in-depth knowledge of a specific topic.
[1] https://www.metafilter.com/
10. harrisonjackson ◴[] No.21978644[source]
Becoming a handpicked moderator sounds exactly like an earned privilege.

It doesn't need to be an automated system like StackOverflow, though I do think that is a good starting point for Goodreads and this specific problem.

11. sbarre ◴[] No.21979280[source]
Goodreads has been on life support for years. The community itself has complained about a lack of innovation and updates, and this is just another consequence of a neglected operation.

The real problem here is that Amazon doesn't want to put any money into it.

Your solution makes a lot of sense, but would require effort, and I doubt anyone involved in running GR cares enough to do it.

replies(1): >>21984963 #
12. sbarre ◴[] No.21979336{3}[source]
> One of the fake reviews was by someone who had passed away with a picture obtained from their obituary. A moderator who knows an author or book has been flagged could spend a minute to find this out.

This is an "obvious in hindsight" example, but do we expect mods to Google every single name/photo on every review or comment, and then determine whether it's legit?

This quickly becomes an escalation game where the effort to identify fakes just gets more and more tedious, and it's been repeatedly proven that trolls have way more time and energy to spend on this game than volunteer moderators, and will simply out-grind them to keep up their harassment.

Any solution that involves putting in more human effort than the trolls is likely to fail.

13. edanm ◴[] No.21980938[source]
You bring up HN, but it also has a Stack Overflow-like system. It's much lighter, but there are things you can't do as a new user (I believe you can't "flag" or "vouch" for posts).
14. rodgerd ◴[] No.21984963[source]
As an environment for people who care about books, perhaps.

But they completely own any Google search for an author or books, well beyond any dedicated fan forum, publisher, Wikipedia, or the author's own site. They're a cancer on the Internet and search engines.