giancarlostoro (No.45106227):
I'm not sure why they don't just cache the websites and avoid going back for at least 24 hours, which would be fine for most sites. I swear it's like we're re-learning software engineering basics with LLMs / AI, and it kills me.
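A 24-hour TTL cache plus conditional GETs is roughly all it would take. Here's a minimal sketch in Python; the in-memory dict, the 24-hour TTL, and the use of the requests library are my own assumptions for illustration, not how any particular crawler actually works:

    import time
    import requests

    CACHE_TTL = 24 * 60 * 60           # don't re-fetch the same URL within 24 hours
    _cache = {}                        # url -> (fetched_at, etag, body)

    def fetch(url):
        now = time.time()
        entry = _cache.get(url)
        if entry and now - entry[0] < CACHE_TTL:
            return entry[2]            # still fresh: don't touch the site at all

        headers = {}
        if entry and entry[1]:
            headers["If-None-Match"] = entry[1]   # conditional GET via ETag

        resp = requests.get(url, headers=headers, timeout=30)
        if resp.status_code == 304 and entry:
            # unchanged on the server: refresh the timestamp, reuse the old body
            _cache[url] = (now, entry[1], entry[2])
            return entry[2]

        _cache[url] = (now, resp.headers.get("ETag"), resp.text)
        return resp.text

Even just the ETag/304 path would cut most of the repeat traffic without ever serving stale data.
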
kpw94 (No.45106404):
Yeah, the landscape when there were many more search engines must have been exactly the same...

I think the eng teams behind those were just more competent and more frugal with their processing.

And since there wasn't any AWS equivalent, they had to be better citizens: their IP ranges were well known, so banning them was trivial for the crawled websites.
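For what it's worth, that kind of range ban really was trivial. A hypothetical sketch using Python's ipaddress module; the CIDR blocks below are documentation ranges, not any real crawler's addresses:

    import ipaddress

    # hypothetical published crawler ranges -- placeholders, not real ones
    BANNED_RANGES = [
        ipaddress.ip_network("192.0.2.0/24"),
        ipaddress.ip_network("198.51.100.0/24"),
    ]

    def is_banned(client_ip: str) -> bool:
        addr = ipaddress.ip_address(client_ip)
        return any(addr in net for net in BANNED_RANGES)

    # drop the request early, e.g. in web server config or WSGI middleware
    print(is_banned("192.0.2.17"))   # True
    print(is_banned("203.0.113.9"))  # False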

danudey (No.45109209):
It's worth noting that search engines back then (and now? except for the AI ones) generally followed robots.txt, which meant that if there were heavy areas of your site you didn't want indexed, you could filter them out and let the crawlers follow only the static pages. You could block off all of /cgi-bin/, for example, and they would never hit your CGI scripts - useful if your guestbook software wrote out static files to be served.
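Honoring robots.txt is also cheap for a crawler; Python even ships a parser in the standard library. A quick sketch, assuming a site whose robots.txt disallows /cgi-bin/ (example.com and the user-agent name are placeholders):

    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()                         # fetch and parse the site's rules

    ua = "ExampleCrawler"             # placeholder user-agent string
    print(rp.can_fetch(ua, "https://example.com/cgi-bin/guestbook.cgi"))  # False if /cgi-bin/ is disallowed
    print(rp.can_fetch(ua, "https://example.com/guestbook.html"))         # True for the static copy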

The search engines were also limited in resources, so they were judicious about what they fetched, when, and how often; optimizing their own crawlers saved them money, and in turn it saved the websites too. Even with a hundred crawlers actively indexing your site, each one wasn't going to hit it more than, say, once a day, and 100 requests a day wasn't much even back then.
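That per-site restraint isn't hard to build either. A minimal politeness-delay sketch; the one-request-per-30-seconds-per-host policy is an assumed number, just to show the idea:

    import time
    from urllib.parse import urlparse

    MIN_DELAY = 30.0      # assumed minimum seconds between requests to the same host
    _last_hit = {}        # host -> timestamp of our last request

    def wait_for_turn(url):
        host = urlparse(url).netloc
        last = _last_hit.get(host)
        if last is not None:
            remaining = MIN_DELAY - (time.time() - last)
            if remaining > 0:
                time.sleep(remaining)   # back off instead of hammering the host
        _last_hit[host] = time.time()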

Now, companies are pumping billions of dollars into AI; budgets are infinite, limits are bypassed, and norms are ignored. If a company thinks it can benefit from indexing your site 30 times a minute, it will, and even if it doesn't benefit, there's no reason for it to stop, because the extra requests cost it nothing. These companies can't risk being anything other than up-to-date: if users come asking about current events and why Space Force is moving to Alabama, and your AI doesn't know but someone else's does, then you're behind the times.

So, in the interest of maximizing short-term profit above all else - which is the only thing AI companies are doing in any way, shape, or form - they may as well scrape every URL on your site once per second, because it doesn't cost them anything and they don't care if you go bankrupt and shut down.