
270 points | ilamont | 1 comment
harrisonjackson No.21974392
There are plenty of communities that mitigate this problem through earned privileges. Real users who participate in the community are able to do more than someone who just signed up with a throwaway address. Stack Overflow seems like an okay model... recent moderator issues aside.

Also, the ability to flag an author or book for extra moderation seems like a no-brainer. Once there is evidence of harassment, all user content should be approved before it is made public. Enable trusted moderators from the community to help with this if paid moderators cannot keep up.
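The two ideas above (earned privileges plus per-book flagging) can be sketched as a single publish-or-queue decision. This is a minimal illustration, not Goodreads' actual logic; the reputation threshold is an assumed value.

```python
REP_THRESHOLD = 50  # assumed cutoff for "earned" publish rights, not from the thread

def review_status(user_reputation: int, book_flagged: bool) -> str:
    """Decide whether a new review publishes immediately or waits for a moderator.

    Low-reputation accounts, or any account reviewing a book that has been
    flagged for harassment, go to a moderation queue instead of publishing.
    """
    if book_flagged or user_reputation < REP_THRESHOLD:
        return "queued"
    return "published"
```

A trusted-community-moderator scheme would then drain the "queued" bucket, so paid staff only handle escalations.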

This seems like it could get so much worse than it currently is. The target of harassment seems to be taking it well, but what happens on a platform like this to someone who isn't as prepared to deal with it?

replies(3): >>21975018 #>>21977718 #>>21979280 #
crazygringo No.21975018
I think there's a big difference between moderation according to standards, and false reviews.

It's comparatively easy to determine if any single post is using banned language, is abusive, etc. The single post can then be removed.

False reviews, on the other hand, are virtually impossible to identify individually. People's opinions on a book legitimately differ. There isn't an obvious way to distinguish between a review that's part of a harassment campaign or paid brigade, versus one that's genuine. It's only in aggregate that something seems to be wrong -- but how do you fix it? How do you select which individual reviews get removed?

Moderation is not really a solution here, because all individual reviews will be approved.

replies(1): >>21975293 #
harrisonjackson No.21975293
One of the fake reviews used the name and photo of someone who had passed away, with the picture taken from their obituary. A moderator who knows an author or book has been flagged could spend a minute to find this out.

It is definitely a difficult problem - I'll agree with you there. There are some other good suggestions in the thread on making it easier to flag false reviews and to moderate reviews beyond "community standards".

I like the idea of using a captcha that prompts you to enter a random word from a random chapter in the book.
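A minimal sketch of that captcha idea, assuming a hypothetical `chapters` mapping of chapter title to chapter text (a real site would pull this from the book's own content):

```python
import random

def make_book_captcha(chapters):
    """Build a challenge: name the word at a given position in a random chapter.

    `chapters` is a hypothetical dict of chapter title -> chapter text.
    Returns the prompt to show the user and the expected answer.
    """
    title, text = random.choice(list(chapters.items()))
    # Strip punctuation so the expected answer is just the bare word.
    words = [w.strip(".,;:!?\"'") for w in text.split()]
    # Prefer longer words so the answer isn't "the" or "a".
    candidates = [i for i, w in enumerate(words) if len(w) >= 5]
    idx = random.choice(candidates or range(len(words)))
    prompt = f"Enter word #{idx + 1} of the chapter '{title}'"
    return prompt, words[idx]

def check_answer(expected, given):
    # Case-insensitive comparison; ignore surrounding whitespace.
    return expected.casefold() == given.strip().casefold()
```

Anyone who actually has the book can answer in seconds; a brigade of accounts that never opened it cannot, at least not cheaply.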

Another system could just hide reviews that are not verified, tying into Amazon purchases to verify them. I don't know why Amazon wouldn't lean on the fact that it owns Goodreads to do this... Make all the reviews visible if the user asks to see the unverified ones, but by default show only reviews from people who bought the book through Amazon.
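That default-to-verified filter is a one-liner in spirit. A sketch, assuming hypothetical review records with a `user_id` field and a set of purchaser IDs drawn from order history:

```python
def visible_reviews(reviews, verified_user_ids, show_unverified=False):
    """Return reviews from verified purchasers by default; all on request.

    `reviews` is a hypothetical list of dicts with a 'user_id' key;
    `verified_user_ids` would come from Amazon purchase records.
    """
    if show_unverified:
        return reviews
    return [r for r in reviews if r["user_id"] in verified_user_ids]
```

The key design point is the default: unverified reviews aren't deleted (so no one has to adjudicate opinions), they just lose the front page.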

replies(3): >>21975435 #>>21976769 #>>21979336 #
sbarre No.21979336
> One of the fake reviews was by someone who had passed away with a picture obtained from their obituary. A moderator who knows an author or book has been flagged could spend a minute to find this out.

This is an "obvious in hindsight" example, but do we expect mods to Google-search every name and photo on every review or comment, and then determine whether it's legit?

This quickly becomes an escalation game where the effort to identify fakes grows ever more tedious. It's been repeatedly proven that trolls have far more time and energy to spend on this game than volunteer moderators do, and they will simply out-grind them to keep up their harassment.

Any solution that involves putting in more human effort than the trolls is likely to fail.