
270 points ilamont | 2 comments
1. oehpr No.21975294
I find topics like this very interesting.

Fake reviews on Amazon, vote bombing on YouTube, fake upvotes on Reddit, fake likes on Facebook. I'd even extend this tangentially to cybersquatting DNS records, email spam, and robocallers.

It's a matter of trust. We think we can just set up a big polling station somewhere and get the community's opinion on something, one person one vote. "Here's the big central aggregate; take the law of averages and you have a good sense of how good something is." But on the internet the Sybil attack reigns supreme, and that assumption doesn't hold.

Whenever I read articles on problems like these, the question of "why it's happening and how to fix it" invariably drifts to "we need better moderation". I never think that. I think "stop trusting Sybil". Not even THAT! Stop asking me to trust random people I don't trust!

What if, instead of having a single aggregate review, we "web of trust" it instead? Here's the system as I imagine it. It's not fully thought out at this point, so please chime in with criticisms if you have them.

The gist: every user has their own list of ratings for books in the system, and a list of people they "trust". The ratings you see for a book are the average of the ratings from the people you trust, recursing out through the web of trust. The ratings are calculated live, on demand, with the only thing held constant being each individual's personal ratings.
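
Very roughly, and just to make it concrete (every name, class, and number here is made up for illustration, not a real design or API), I picture something like:

    # Rough sketch only: a user is just their own ratings plus a trust list,
    # and everything else is computed on the fly.
    from dataclasses import dataclass, field

    @dataclass
    class User:
        name: str
        ratings: dict = field(default_factory=dict)  # book_id -> 1..5 stars
        trusts: list = field(default_factory=list)   # Users I explicitly trust

    def ratings_for(me, book_id, max_depth=3, blacklist=frozenset()):
        """Walk outward through the web of trust, collecting ratings for one
        book. Computed live, on demand; the only persistent state is each
        user's own ratings and trust list."""
        seen = {me.name}
        frontier = list(me.trusts)
        collected = []                        # (who, rating, depth) triples
        depth = 1
        while frontier and depth <= max_depth:
            nxt = []
            for u in frontier:
                if u.name in seen or u.name in blacklist:
                    continue                  # blacklisted people (and anyone
                seen.add(u.name)              # only reachable through them)
                if book_id in u.ratings:      # simply never get visited
                    collected.append((u.name, u.ratings[book_id], depth))
                nxt.extend(u.trusts)
            frontier = nxt
            depth += 1
        return collected

    def average_rating(me, book_id, blacklist=frozenset()):
        rs = [r for (_, r, _) in ratings_for(me, book_id, blacklist=blacklist)]
        return sum(rs) / len(rs) if rs else None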

I'm leaving a lot undefined right now, but an example of how I imagine such a system to work:

Let's say you know Harry Potter and the Chamber of Secrets is the perfect book. You go to the book's page and review the ratings. Your rating is obviously 5. You see lots of other people in your list who have rated it 5 (naturally). But you also see that there are some 3's in there. You can click on the 3's and see how they reached you through the web of trust. It turns out a bunch of the 3's reached you through someone named Bob: Bob trusts Alice, and Alice has trusted a bunch of people from some kind of Harry Potter-hating cabal of philistines. You could blacklist the cabal, but there are too many of them, so instead you blacklist Alice. All the 3's are gone, and so are all the other reviews from the cabal. Your list of reviews now more accurately reflects views that you would trust.
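
In code, the tracing-and-blacklisting step might look something like this (again purely illustrative, building on the hypothetical sketch above):

    def trust_paths(me, target_name, max_depth=3, blacklist=frozenset()):
        """Every chain of trust (e.g. me -> Bob -> Alice -> cabal member)
        that connects me to the person who left a rating, skipping anyone
        I've blacklisted."""
        paths = []
        def walk(user, path):
            if user.name in blacklist or len(path) > max_depth:
                return
            if user.name == target_name:
                paths.append(path + [user.name])
                return
            for t in user.trusts:
                if t.name not in path:        # don't loop around cycles
                    walk(t, path + [user.name])
        for t in me.trusts:
            walk(t, [me.name])
        return paths

    # Blacklisting Alice removes every rating that only reached me through her:
    #   average_rating(me, "chamber_of_secrets")                       # cabal's 3s included
    #   average_rating(me, "chamber_of_secrets", blacklist={"Alice"})  # cabal's 3s gone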

In this way, your effort to moderate away the opinions of people you don't trust has done double duty: it has cleared out views you disagree with and do not trust, and it has done the same for the people who trust you.

It is decentralized moderation.

There are some challenges here. For example: how do you bootstrap the system for new users? What is the best way to calculate the average ratings in a timely manner? How much weight should be applied to a friend of a friend of a friend? What kind of feedback could we give users to incentivize them against trusting people like the philistine cabal? Etc.
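
For the friend-of-a-friend-of-a-friend question specifically, one obvious (if naive) option is to decay each rating's weight by how many hops away it sits. A hypothetical sketch, reusing the ratings_for helper from above:

    # Hypothetical: weight each rating by decay**depth so opinions fade
    # the further out in the web of trust they are.
    def weighted_rating(me, book_id, decay=0.5, blacklist=frozenset()):
        rated = ratings_for(me, book_id, blacklist=blacklist)
        if not rated:
            return None
        num = sum(r * decay ** d for (_, r, d) in rated)
        den = sum(decay ** d for (_, _, d) in rated)
        return num / den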

I have some thoughts on the rest of these too, but I'll spare you.

Like any decentralized system, it's more work, it's more complex, and it has some surprising and ugly edge cases. I also don't think I'm some genius for coming up with this; it's just an application of Web of Trust. But I have not seen a system like this in practice, nor do I ever seem to see people talk about it when the topics of moderation and spam come up. If you know of any case studies, let me know!

replies(1): >>21976327 #
2. inimino No.21976327
Generally the problem with systems like this is that you need a critical mass of users rating books before the site has any value, and if the only ratings you see are from people you trust, that problem gets a lot worse. Once you add UI friction, adoption drops even more. And because of network effects, sites that use dark patterns and prioritize engagement over trustworthiness will tend to thrive. Just look at the history of social media.