
Anubis Works

(xeiaso.net)
313 points by evacchi
gyomu No.43668594
If you’re confused about what this is - it’s to prevent AI scraping.

> Anubis uses a proof-of-work challenge to ensure that clients are using a modern browser and are able to calculate SHA-256 checksums

https://anubis.techaro.lol/docs/design/how-anubis-works
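
To make that concrete, here's a rough sketch of that kind of challenge in Go. This is not Anubis's actual code (Anubis runs the search in the browser, and the names and parameters here are made up for illustration): the server hands out a random challenge string and a difficulty, the client brute-forces a nonce until SHA-256(challenge + nonce) starts with enough zero hex digits, and the server verifies with a single hash:

    // Illustrative SHA-256 proof-of-work sketch, not Anubis's real API.
    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "strconv"
        "strings"
    )

    // solve is the client side: brute-force a nonce until the hex digest
    // of SHA-256(challenge + nonce) starts with `difficulty` zero digits.
    func solve(challenge string, difficulty int) (nonce int, hash string) {
        prefix := strings.Repeat("0", difficulty)
        for nonce = 0; ; nonce++ {
            sum := sha256.Sum256([]byte(challenge + strconv.Itoa(nonce)))
            hash = hex.EncodeToString(sum[:])
            if strings.HasPrefix(hash, prefix) {
                return nonce, hash
            }
        }
    }

    // verify is the server side: a single hash evaluation, cheap to check.
    func verify(challenge string, nonce, difficulty int) bool {
        sum := sha256.Sum256([]byte(challenge + strconv.Itoa(nonce)))
        return strings.HasPrefix(hex.EncodeToString(sum[:]), strings.Repeat("0", difficulty))
    }

    func main() {
        nonce, hash := solve("random-server-challenge", 4)
        fmt.Printf("nonce=%d hash=%s ok=%v\n", nonce, hash,
            verify("random-server-challenge", nonce, 4))
    }

The asymmetry is the point: solving costs the client thousands of hashes per request, while checking costs the server exactly one.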

This is pretty cool, I have a project or two that might benefit from it.

replies(2): >>43669511 >>43671745
x3haloed No.43669511
I’ve been wondering for many years now whether the web is for humans or machines. I personally can’t think of a good reason to specifically gate bots when it comes to serving content. Bots posting content or triggering actions could obviously be problematic in many circumstances.

But I find that when it comes to simply serving content, human vs. bot is not usually what you’re trying to filter or block on. As long as a given client is not abusing your systems, why do you care whether it’s a human?

replies(8): >>43669544 >>43669558 >>43669572 >>43670108 >>43670208 >>43670880 >>43671272 >>43676454
t-writescode No.43669544
> I personally can’t think of a good reason to specifically try to gate bots

There have been numerous posts on HN about people getting slammed by bots, especially LLM scrapers, to the tune of many, many dollars and terabytes of transfer, burning bandwidth and driving up server costs.

replies(1): >>43669560
ronsor No.43669560
I'm genuinely skeptical that those are all real LLM scrapers. For one, a lot of content is already in Common Crawl, and AI companies don't want to redo all that crawling when they can just pull the WARC files from AWS.
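
To illustrate how little work that is: Common Crawl publishes a public CDX index you can query for a URL, then range-read just the matching record out of the public WARC files on data.commoncrawl.org. A rough Go sketch (the collection name below is an example; each crawl gets its own):

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "net/http"
        "strconv"
    )

    // cdxHit holds the fields we need from one CDX index result line.
    type cdxHit struct {
        Filename string `json:"filename"`
        Offset   string `json:"offset"`
        Length   string `json:"length"`
    }

    func main() {
        // Ask the CDX index for one capture of example.com.
        // "CC-MAIN-2024-10" is an example collection name.
        idx := "https://index.commoncrawl.org/CC-MAIN-2024-10-index?url=example.com&output=json&limit=1"
        resp, err := http.Get(idx)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var hit cdxHit
        if err := json.NewDecoder(resp.Body).Decode(&hit); err != nil {
            panic(err)
        }
        off, _ := strconv.Atoi(hit.Offset)
        n, _ := strconv.Atoi(hit.Length)

        // Range-read just this record out of the multi-gigabyte WARC file.
        req, _ := http.NewRequest("GET", "https://data.commoncrawl.org/"+hit.Filename, nil)
        req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", off, off+n-1))
        rec, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer rec.Body.Close()
        body, _ := io.ReadAll(rec.Body)
        fmt.Printf("fetched %d bytes of gzipped WARC record\n", len(body))
    }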

I largely suspect that these are mostly other bots pretending to be LLM scrapers. Does anyone even check whether the bots' IP ranges belong to the AI companies?
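
For what it's worth, there is a standard check: forward-confirmed reverse DNS, the same procedure Google documents for verifying Googlebot. Resolve the IP to a hostname, check that it's under the company's domain, then resolve the hostname back and confirm it maps to the same IP. A rough Go sketch (the example IP and domain suffixes are illustrative):

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // verifyCrawlerIP reports whether ip reverse-resolves to a hostname
    // under one of the given domain suffixes AND that hostname
    // forward-resolves back to the same IP. A PTR record alone can be
    // spoofed; the forward confirmation is what makes the check stick.
    func verifyCrawlerIP(ip string, suffixes []string) bool {
        names, err := net.LookupAddr(ip)
        if err != nil {
            return false
        }
        for _, name := range names {
            host := strings.TrimSuffix(name, ".")
            matched := false
            for _, s := range suffixes {
                if strings.HasSuffix(host, s) {
                    matched = true
                    break
                }
            }
            if !matched {
                continue
            }
            addrs, err := net.LookupIP(host)
            if err != nil {
                continue
            }
            for _, a := range addrs {
                if a.String() == ip {
                    return true
                }
            }
        }
        return false
    }

    func main() {
        // 66.249.66.1 is used purely as an example address here.
        fmt.Println(verifyCrawlerIP("66.249.66.1", []string{".googlebot.com", ".google.com"}))
    }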

replies(4): >>43669584 >>43669780 >>43669996 >>43670176
userbinator No.43669996
I also suspect that those working on "anti-bot" solutions may have a hand in this.

What better way to show the effectiveness of your solution than to help create the problem in the first place?

replies(1): >>43672784
zaphar No.43672784
Why? When there are hundreds of hopeful AI/LLM scrapers more than willing to do that work for you, what possible reason would you have to do it yourself? Typical, common human behavior is perfectly capable of explaining this. There's no reason to reach for an underhanded conspiracy theory when simple incompetence and greed are more than adequate to explain it.
replies(1): >>43676087
userbinator No.43676087
CF hosts websites that sell DDoS services.

Google really wants everyone to use its spyware-embedded browser.

There are tons of other "anti-bot" solutions without those conflicts of interest, yet the ones that become popular all seem to further those goals instead.