
131 points timshell | 3 comments
imiric No.44378450
I applaud the effort. We need human-friendly CAPTCHAs, as much as they're generally disliked. They're the only solution to the growing spam and abuse problem on the web.

Proof-of-work CAPTCHAs work well for making bots expensive to run at scale, but they still rely on accurate bot detection. Avoiding both false positives and false negatives is crucial, yet no existing approach is reliable enough.
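
To make that concrete, here is a rough hashcash-style sketch (the function names and difficulty value are illustrative, not taken from any particular product): the server issues a random nonce, the client grinds a counter until the hash has enough leading zero bits, and the server verifies with a single hash.

    import hashlib
    import secrets

    DIFFICULTY_BITS = 20  # illustrative; solving takes a moment, verifying is instant

    def make_challenge() -> str:
        # Server: hand the client a random nonce to extend.
        return secrets.token_hex(16)

    def solve(nonce: str) -> int:
        # Client: brute-force a counter until the hash has DIFFICULTY_BITS leading zero bits.
        counter = 0
        while True:
            digest = hashlib.sha256(f"{nonce}:{counter}".encode()).digest()
            if int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0:
                return counter
            counter += 1

    def verify(nonce: str, counter: int) -> bool:
        # Server: one hash check, cheap no matter how hard the challenge was.
        digest = hashlib.sha256(f"{nonce}:{counter}".encode()).digest()
        return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0

The asymmetry is the whole point: the client pays for millions of hashes, the server pays for one.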

One comment re:

> While AI agents can theoretically simulate these patterns, the effort likely outweighs other alternatives.

For now. Behavioral and cognitive signals seem to work against the current generation of bots, but they will likely also be defeated as AI tools become cheaper and more accessible. It's only a matter of time until attackers can train a model on real human input and run inference cheaply enough, or until the benefit of using a bot against a specific target outweighs the cost.
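
For illustration only, the kind of behavioral signal being discussed might look something like this toy check on pointer traces (features and thresholds invented for the example), which is exactly the sort of thing a model trained on real human input could learn to satisfy:

    import statistics

    def looks_human(events: list[tuple[float, float, float]]) -> bool:
        # events: (timestamp_s, x, y) pointer samples; thresholds are made up for the example.
        if len(events) < 10:
            return False
        dts = [b[0] - a[0] for a, b in zip(events, events[1:])]
        (x0, y0), (x1, y1) = (events[0][1], events[0][2]), (events[1][1], events[1][2])
        perfectly_straight = all(
            (x1 - x0) * (y - y0) == (y1 - y0) * (x - x0) for _, x, y in events[2:]
        )
        # Humans produce jittery timing and curved paths; naive replay bots tend not to.
        return statistics.pstdev(dts) > 0.002 and not perfectly_straight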

So I think we will need a different detection mechanism. Maybe something from the real world, some type of ID, or even micropayments. I'm not sure, but it's clear that bot detection is on the opposite, and currently losing, side of the AI race.

replies(11): >>44378709 #>>44379146 #>>44379545 #>>44380175 #>>44380453 #>>44380659 #>>44380693 #>>44382515 #>>44384051 #>>44387254 #>>44389004 #
JimDabell No.44378709
> So I think we will need a different detection mechanism. Maybe something from the real world, some type of ID, or even micropayments. I'm not sure, but it's clear that bot detection is at the opposite, and currently losing, side of the AI race.

I think the most likely long-term solution is something like DIDs.

https://en.wikipedia.org/wiki/Decentralized_identifier

A small number of trusted authorities (e.g. governments) issue IDs. Users can identify themselves to third parties without disclosing their real-world identity to the third party and without disclosing their interaction with the third party to the issuing body.

The key part of this is that the identity is persistent. A website might not know who you are, but they know when it’s you returning. So if you get banned, you can’t just register a new account to evade the ban. You’d need to do the equivalent of getting a new passport from your government.
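
A rough sketch of that persistent-but-pairwise property (this only illustrates the idea; the real DID/verifiable-credential machinery uses signatures and selective-disclosure proofs): derive a different stable identifier per site from one issuer-attested secret, so a site can recognise a returning user, and a ban sticks, without learning who they are or linking them across sites.

    import hashlib
    import hmac
    import secrets

    # Hypothetical wallet-side derivation for illustration only.
    issuer_attested_secret = secrets.token_bytes(32)  # bound to one real person by the issuer

    def site_identifier(site_domain: str) -> str:
        # Same secret + same site -> same identifier on every visit, so a ban persists.
        # Different sites get unrelated identifiers, so the user can't be linked across them.
        return hmac.new(issuer_attested_secret, site_domain.encode(), hashlib.sha256).hexdigest()

    print(site_identifier("example.com"))               # stable across visits
    print(site_identifier("news.ycombinator.com"))      # unlinkable to the one above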

replies(7): >>44378752 #>>44379158 #>>44379293 #>>44379764 #>>44381669 #>>44382394 #>>44387968 #
BiteCode_dev No.44379764
But this means that a SaaS banning you from your account for spurious reasons can now be a serious problem.
replies(2): >>44380206 #>>44383506 #
1. JimDabell No.44383506
That’s the point. Bans should be effective.
replies(1): >>44385932 #
2. BiteCode_dev No.44385932
I get it. And I also know that Apple and Google would abuse that, destroying lives and businesses as casually as I eat my breakfast. Then thousands of disposable companies would pop up with valid IDs, abuse some system (like the terrible DMCA), and make it worse.

If you think people self-censoring on social media is a problem now (the "unlive" newspeak is always such a dystopian hint to me), you haven't seen anything yet.

replies(1): >>44387300 #
3. JimDabell No.44387300
Businesses should not be forced to serve abusive users. They should have the choice to refuse to serve somebody permanently. You do not have the right to use somebody else’s service without their permission. If they want you off their platform, they should be able to keep you off.

The whole point of having trusted issuers is that there aren’t any “disposable companies” who hand out many identities in an uncontrolled manner. If there were, they would quickly become untrusted, making the IDs worthless.
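
Sketching that trust-list idea (illustrative only; a real deployment would pin issuers' public signing keys rather than the HMAC stand-in used here): a verifier accepts a credential only if it checks out against a key from an issuer it already trusts, so an ID minted by an unknown or disposable issuer is rejected outright.

    import hashlib
    import hmac

    # Hypothetical trust list: issuer name -> verification key known to verifiers.
    TRUSTED_ISSUERS = {
        "gov.example": b"key-shared-with-verifiers",
    }

    def accept_credential(issuer: str, subject_id: str, tag: str) -> bool:
        key = TRUSTED_ISSUERS.get(issuer)
        if key is None:
            return False  # unknown or "disposable" issuer: rejected outright
        expected = hmac.new(key, subject_id.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)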