
Anubis Works

(xeiaso.net)
313 points by evacchi | source
throwaway150 ◴[] No.43668638[source]
Looks cool. But please help me understand. What's to stop AI companies from solving the challenge, completing the proof of work and scrape websites anyway?
replies(6): >>43668690 #>>43668774 #>>43668823 #>>43668857 #>>43669150 #>>43670014 #
marginalia_nu ◴[] No.43668823[source]
The problem with scrapers in general is the asymmetry of compute between generating and requesting a website. You can likely make millions of HTTP requests with the compute required to generate the average response.

If you make it more expensive to request documents at scale, you make this type of crawling prohibitively expensive. On a small scale it really doesn't matter, but if you're casting an extremely wide net and re-fetching the same documents hundreds of times, it really does matter, even if you have a big VC budget.
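The asymmetry being described is the same one behind hashcash-style proof of work: solving a challenge takes many hash attempts, verifying a solution takes one. A minimal sketch, assuming a SHA-256 leading-zeros scheme (the function names and difficulty value here are illustrative, not Anubis's actual parameters):

```python
import hashlib

def solve(challenge: str, difficulty: int) -> int:
    """Brute-force a nonce so SHA-256(challenge + nonce) starts with
    `difficulty` zero hex digits. Cost grows ~16x per difficulty step."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """Verification is a single hash: cheap for the server,
    while solving is expensive for the requester."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

One page load paying this cost is barely noticeable; a crawler re-fetching millions of pages pays it millions of times, which is the economics the comment above is pointing at.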

replies(2): >>43669262 #>>43669530 #
charcircuit ◴[] No.43669262[source]
If you make it prohibitively expensive, almost no regular user will want to wait for it.
replies(2): >>43669428 #>>43669586 #
bobmcnamara ◴[] No.43669428[source]
Exponential backoff!