
133 points timshell | 6 comments | | HN request time: 0.923s | source | bottom
imiric ◴[] No.44378450[source]
I applaud the effort. We need human-friendly CAPTCHAs, as much as they're generally disliked. They're the only solution to the growing spam and abuse problem on the web.

Proof-of-work CAPTCHAs work well for making bots expensive to run at scale, but they still rely on accurate bot detection. Avoiding both false positives and false negatives is crucial, yet no existing approach is reliable enough.

One comment re:

> While AI agents can theoretically simulate these patterns, the effort likely outweighs other alternatives.

For now. Behavioral and cognitive signals seem to work against the current generation of bots, but they will likely also be defeated as AI tools become cheaper and more accessible. It's only a matter of time until attackers can train a model on real human input and run inference cheaply enough. Or until the benefit of deploying a bot against a specific target outweighs the cost.

So I think we will need a different detection mechanism. Maybe something from the real world, some type of ID, or even micropayments. I'm not sure, but it's clear that bot detection is on the opposite, and currently losing, side of the AI race.

replies(11): >>44378709 #>>44379146 #>>44379545 #>>44380175 #>>44380453 #>>44380659 #>>44380693 #>>44382515 #>>44384051 #>>44387254 #>>44389004 #
1. __MatrixMan__ ◴[] No.44382515[source]
> They're the only solution to the growing spam and abuse problem on the web

They're the only solution that doesn't require a pre-existing trust relationship, but the web is more of a dark forest every day and captchas cannot save us from that. Eventually we're going to have to buckle down and maintain a web of trust.

If you notice abuse, you see which common node caused you to trust the abusers, and you revoke trust in that node (and, transitively, everything that it previously caused you to trust).
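That revocation scheme can be sketched in a few lines. This is a minimal toy (all names and the record-who-introduced-whom structure are my assumptions, not something the commenter specified): each trusted node remembers which node vouched for it, and revoking a node also revokes everyone you only trust because of that node.

```python
class TrustStore:
    """Toy transitive-revocation store (hypothetical sketch)."""

    def __init__(self):
        self.trusted = set()
        self.introduced_by = {}  # node -> the node that vouched for it

    def trust(self, node, introducer=None):
        self.trusted.add(node)
        if introducer is not None:
            self.introduced_by[node] = introducer

    def revoke(self, node):
        """Revoke `node` and, transitively, everything it introduced."""
        stack = [node]
        while stack:
            n = stack.pop()
            self.trusted.discard(n)
            # queue up everyone this node vouched for
            stack.extend(c for c, i in self.introduced_by.items()
                         if i == n and c in self.trusted)

store = TrustStore()
store.trust("alice")                       # trusted directly
store.trust("mallory", introducer="alice")
store.trust("bot1", introducer="mallory")
store.revoke("mallory")                    # alice stays; bot1 goes too
```

Revoking "mallory" here leaves "alice" trusted (she was trusted directly) but drops "bot1", since the only path to trusting it ran through the revoked node.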

replies(1): >>44384614 #
2. imiric ◴[] No.44384614[source]
That might be the way to go. Someone else in the thread mentioned a similar reputation system.

The problem is that such a system could be easily abused or misused. A bad actor could intentionally or mistakenly penalize users, which would have global consequences for those users. So we need a web of trust for the judges as well, and some way of disputing and correcting the mistake.

It would be interesting to prototype it, though, and see how it could work at scale.

replies(3): >>44387293 #>>44389009 #>>44391727 #
3. nhecker ◴[] No.44387293[source]
Hyphanet (formerly Freenet) uses a similar Web of Trust, if you want to see a real-life example in action. Maybe Freenet still uses a WoT as well, I'm not sure.
4. fennecbutt ◴[] No.44389009[source]
Well, our apathetic society would need to band together to hold those bad actors to account.

I don't see this ever happening, though.

replies(1): >>44392665 #
5. __MatrixMan__ ◴[] No.44391727[source]
> we need a web of trust for the judges as well

I don't think there should be any judges (or to put it differently, I think every user should be a judge), nor any centralized database, no roots of trust at all. That way it doesn't present any high value targets for corruption to focus on.

The trustworthiness of a user in some domain (won't-DOS-your-page could be a trust domain, writes-honest-product-reviews could be another, not-a-scammer, etc.) as evaluated by some other individual would come down to some aggregation of the shortest paths (and their associated trust scores) between those two users on the trust graph.

There is no trust score for user foo, only a trust score for user foo according to user bar. User baz might see foo differently.
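One way to make "trust score for foo according to bar" concrete: treat bar's view as a weighted graph and take the best path's product of edge scores as the aggregation (the product rule and every name below are my assumptions; the thread doesn't pin down a formula). Maximizing a product of scores in (0, 1] is the same as Dijkstra on -log(score), so a max-heap search works directly:

```python
import heapq
from collections import defaultdict

def trust_between(edges, src, dst):
    """edges: {(a, b): score in (0, 1]} meaning a trusts b that much.
    Returns the maximum product of scores over any path src -> dst,
    or 0.0 if no path exists. (Hypothetical aggregation rule.)"""
    graph = defaultdict(list)
    for (a, b), s in edges.items():
        graph[a].append((b, s))
    best = {src: 1.0}
    heap = [(-1.0, src)]          # max-heap via negated scores
    while heap:
        neg, node = heapq.heappop(heap)
        score = -neg
        if node == dst:
            return score          # first pop of dst is the best path
        if score < best.get(node, 0.0):
            continue              # stale heap entry
        for nxt, s in graph[node]:
            cand = score * s
            if cand > best.get(nxt, 0.0):
                best[nxt] = cand
                heapq.heappush(heap, (-cand, nxt))
    return 0.0

edges = {("bar", "carol"): 0.9, ("carol", "foo"): 0.8,
         ("bar", "foo"): 0.5}
score = trust_between(edges, "bar", "foo")
```

With these made-up edges, bar -> carol -> foo scores 0.9 * 0.8 = 0.72, beating the direct 0.5 edge, so foo-according-to-bar is 0.72; baz, with different edges, would compute a different number for the same foo.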

If you get scammed, you don't just revoke trust in the scammer. Well, you do, but you also go one hop back and revoke trust in whoever caused you to trust the scammer. This creates incentives toward trust hygiene: if you don't want people to stop trusting you, you have to be careful about who you trust. It's a protocol-level proxy for a skill we've been honing for millennia: looking out for each other.

But it doesn't work if there's just a single company that tracks your FICO score or something like that. Either that company becomes too juicy a target and ends up corrupt itself, or people attack the weak association between user and company so that the company can't actually tell the difference between a scammer and a legit user (the latter is the case for the credit-score companies, hence: identity fraud).

Attacks like that are much harder to pull off if the source of truth isn't some remote database somewhere and is instead based on the set of people you see every day in meatspace.

6. __MatrixMan__ ◴[] No.44392665{3}[source]
I think that most abuse that is prevented by a captcha just isn't worth hunting people down over. If it's susceptible to that kind of abuse in the first place it's a broken protocol.