
Anubis Works

(xeiaso.net)
313 points by evacchi | 1 comment
throwaway150 ◴[] No.43668638[source]
Looks cool. But please help me understand: what's to stop AI companies from solving the challenge, completing the proof of work, and scraping websites anyway?
replies(6): >>43668690 #>>43668774 #>>43668823 #>>43668857 #>>43669150 #>>43670014 #
ndiddy ◴[] No.43668857[source]
This makes it much more expensive for them to scrape because they have to run full web browsers instead of the limited headless browsers without full JavaScript support that they currently use. There's empirical proof that this works. When GNOME deployed it on their GitLab, they found that around 97% of the traffic in a given 2.5-hour period was blocked by Anubis. https://social.treehouse.systems/@barthalion/114190930216801...
replies(1): >>43669357 #
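The proof-of-work scheme being discussed can be sketched roughly as follows: the server hands the browser a random challenge and a difficulty, and the client must brute-force a nonce whose hash has a required number of leading zero hex digits. Verification costs the server one hash, while solving costs the client exponentially more as difficulty rises. This is an illustrative sketch, not Anubis's actual implementation; all function names and the challenge format here are hypothetical.

```python
import hashlib
import secrets


def make_challenge() -> str:
    # Server side: issue a random challenge string (hypothetical format).
    return secrets.token_hex(16)


def solve(challenge: str, difficulty: int) -> int:
    # Client side: brute-force a nonce until SHA-256(challenge + nonce)
    # starts with `difficulty` zero hex digits (~16**difficulty tries).
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1


def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    # Server side: a single hash suffices to check the submitted work.
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)


challenge = make_challenge()
nonce = solve(challenge, difficulty=3)
assert verify(challenge, nonce, 3)
```

The asymmetry is the point: a human's browser solves one challenge per visit, while a scraper hitting millions of pages pays the solving cost millions of times.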
dragonwriter ◴[] No.43669357[source]
> This makes it much more expensive for them to scrape because they have to run full web browsers instead of the limited headless browsers without full JavaScript support that they currently use. There's empirical proof that this works.

It works in the short term, but the more people that use it, the more likely that scrapers start running full browsers.

replies(3): >>43669452 #>>43669570 #>>43670120 #
SuperNinKenDo ◴[] No.43670120[source]
That's the point. An individual user doesn't lose sleep over running a full browser; that's exactly how they use the web anyway. But for an LLM scraper or similar, this greatly increases costs, partially rebalancing the power/cost imbalance. At the very least, it encourages scrapers to externalise fewer costs, rather than rescraping things over and over just because the weight of doing so is borne by somebody else. It's an incentive correction for the commons.