
135 points timshell | 1 comment
imiric ◴[] No.44378450[source]
I applaud the effort. We need human-friendly CAPTCHAs, as much as they're generally disliked. They're the only solution to the growing spam and abuse problem on the web.

Proof-of-work CAPTCHAs work well for making bots expensive to run at scale, but they still rely on accurate bot detection. Avoiding both false positives and false negatives is crucial, yet no existing approach is reliable enough.
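The asymmetry that makes proof-of-work CAPTCHAs attractive can be sketched with a hashcash-style puzzle: the client burns CPU searching for a nonce, while the server verifies with a single hash. (This is an illustrative sketch, not any particular product's scheme; the challenge string and difficulty encoding are assumptions.)

```python
import hashlib
import itertools

def solve(challenge: str, difficulty: int) -> int:
    """Client-side work: find a nonce whose SHA-256 digest of
    challenge:nonce starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """Server-side check: one hash, cheap regardless of difficulty."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# Difficulty 4 means ~16^4 ≈ 65k hashes on average for the client,
# but still exactly one hash for the server to verify.
nonce = solve("session-abc123", 4)
assert verify("session-abc123", nonce, 4)
```

Raising the difficulty scales the attacker's per-request cost linearly while the server's cost stays flat, which is why such schemes hurt bots at scale far more than individual humans.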

One comment re:

> While AI agents can theoretically simulate these patterns, the effort likely outweighs other alternatives.

For now. Behavioral and cognitive signals seem to work against the current generation of bots, but they will likely also be defeated as AI tools become cheaper and more accessible. It's only a matter of time until attackers can train a model on real human input and inference becomes cheap enough. Or until the benefit of using a bot against a specific target outweighs the cost.

So I think we will need a different detection mechanism. Maybe something from the real world, some type of ID, or even micropayments. I'm not sure, but it's clear that bot detection is on the opposite, and currently losing, side of the AI race.

replies(11): >>44378709 #>>44379146 #>>44379545 #>>44380175 #>>44380453 #>>44380659 #>>44380693 #>>44382515 #>>44384051 #>>44387254 #>>44389004 #
__MatrixMan__ ◴[] No.44382515[source]
> They're the only solution to the growing spam and abuse problem on the web

They're the only solution that doesn't require a pre-existing trust relationship, but the web is more of a dark forest every day and captchas cannot save us from that. Eventually we're going to have to buckle down and maintain a web of trust.

If you notice abuse, you see which common node caused you to trust the abusers, and you revoke trust in that node (and, transitively, everything that it previously caused you to trust).
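The revocation rule described above can be sketched as a small graph structure: each trusted node records whom it introduced, and revoking a node transitively drops everything it caused you to trust. (A minimal sketch; the class and method names are hypothetical, and it ignores complications like a user vouched for by multiple introducers.)

```python
from collections import defaultdict

class TrustGraph:
    """Web of trust rooted at yourself: you trust a node only via a
    chain of vouches, and revoking a node cuts its whole subtree."""

    def __init__(self, root: str):
        self.root = root
        self.trusted = {root}
        self.introduced = defaultdict(set)  # introducer -> nodes it vouched for

    def vouch(self, introducer: str, newcomer: str) -> None:
        if introducer not in self.trusted:
            raise ValueError(f"{introducer} is not trusted")
        self.introduced[introducer].add(newcomer)
        self.trusted.add(newcomer)

    def revoke(self, node: str) -> None:
        # Depth-first: drop the node and, transitively, everything
        # it previously caused us to trust.
        stack = [node]
        while stack:
            n = stack.pop()
            if n in self.trusted:
                self.trusted.discard(n)
                stack.extend(self.introduced.pop(n, ()))

g = TrustGraph("me")
g.vouch("me", "alice")
g.vouch("alice", "spammer")
g.revoke("alice")        # spammer loses trust along with alice
```

The design choice here is that accountability flows upward: an introducer stakes its own reputation on everyone it vouches for, which is what gives the "find the common node and cut it" response its teeth.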

replies(1): >>44384614 #
imiric ◴[] No.44384614[source]
That might be the way to go. Someone else in the thread mentioned a similar reputation system.

The problem is that such a system could be easily abused or misused. A bad actor could intentionally or mistakenly penalize users, which would have global consequences for those users. So we need a web of trust for the judges as well, and some way of disputing and correcting the mistake.

It would be interesting to prototype it, though, and see how it could work at scale.

replies(3): >>44387293 #>>44389009 #>>44391727 #
fennecbutt ◴[] No.44389009[source]
Well, an apathetic society would need to band together to hold those bad actors to account.

I don't see this ever happening, though.

replies(1): >>44392665 #
__MatrixMan__ ◴[] No.44392665[source]
I think that most abuse prevented by a CAPTCHA just isn't worth hunting people down over. And if a protocol is susceptible to that kind of abuse in the first place, it's a broken protocol.