198 points by todsacerdoti | 1 comment
SquareWheel ◴[] No.45942060[source]
That may work for blocking bad automated crawlers, but an agent acting on behalf of a user wouldn't follow robots.txt. It'd run the risk of hitting the bad URL while trying to understand the page.
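For context, a rough sketch of the kind of trap being discussed (the paths, names, and ban logic here are illustrative assumptions, not taken from the article): robots.txt disallows a URL that is only reachable through an invisible link, so any client that requests it is ignoring robots.txt and can be blocked.

```python
# Illustrative sketch only: a hidden link points at /trap, robots.txt
# disallows /trap, and any client that requests it anyway gets banned.
from http.server import BaseHTTPRequestHandler, HTTPServer

BANNED: set[str] = set()

ROBOTS = b"User-agent: *\nDisallow: /trap\n"
PAGE = (b"<html><body><p>Real content here.</p>"
        b'<a href="/trap" style="display:none">do not follow</a>'
        b"</body></html>")

class TrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ip = self.client_address[0]
        if ip in BANNED:
            self.send_error(403)
        elif self.path == "/robots.txt":
            self._send(ROBOTS, "text/plain")
        elif self.path == "/trap":
            BANNED.add(ip)  # only a robots.txt-ignoring link-follower lands here
            self.send_error(403)
        else:
            self._send(PAGE, "text/html")

    def _send(self, body: bytes, ctype: str):
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), TrapHandler).serve_forever()
```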
replies(2): >>45942461 #>>45942729 #
klodolph ◴[] No.45942461[source]
That sounds like the desired outcome here. Your agent should respect robots.txt, OR it should be designed to not follow links.
replies(1): >>45942613 #
varenc ◴[] No.45942613[source]
An agent acting on my behalf, following my specific and narrowly scoped instructions, should not obey robots.txt because it's not a robot/crawler. Just like how a single cURL request shouldn't follow robots.txt. (It also shouldn't generate any more traffic than a regular browser user.)

Unfortunately "mass scraping the internet for training data" and an "LLM powered user agent" get lumped together too much as "AI Crawlers". The user agent shouldn't actually be crawling.

replies(5): >>45942682 #>>45942689 #>>45942743 #>>45942744 #>>45943011 #
mcv ◴[] No.45942743[source]
If it's a robot it should follow robots.txt. And if it's following invisible links it's clearly crawling.

Sure, a bad site could use this to screw with people, but bad sites have done that since forever in various ways. But if this technique helps against malicious crawlers, I think it's fair. The only downside I can see is that Google might mark you as a malware site. But again, they should be obeying robots.txt.

replies(2): >>45942854 #>>45942890 #
varenc ◴[] No.45942854[source]
Should cURL follow robots.txt? What makes browser software not a robot? Should `curl <URL>` ignore robots.txt but `curl <URL> | llm` respect it?

The line gets blurrier with something like OAI's Atlas browser. It's just re-skinned Chromium, a regular browser, but you can ask an LLM about the content of the page you just navigated to. The decision to use an LLM on that page is made after the page load. Doing the same thing without rendering the page doesn't seem meaningfully different.

In general, robots.txt is for headless automated crawlers fetching many pages, not software performing a specific request for a user. If there's a 1:1 mapping between a user's request and a page load, then it's not a robot. An LLM-powered user agent (browser) wouldn't follow invisible links, or any links, because it's not crawling.
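(To put the 1:1 point in code, a rough sketch with made-up function names: the first function is all a user agent needs per request; the second, harvesting every href on the page, is what makes something a crawler and is where robots.txt and hidden-link traps come into play.)

```python
# Illustrative sketch of the distinction: one fetch per user action vs. crawling.
import urllib.request
from html.parser import HTMLParser

def fetch_for_user(url: str) -> str:
    """One request per user action: no link discovery involved."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

class LinkCollector(HTMLParser):
    """What a crawler does: collect every href, invisible links included."""
    def __init__(self):
        super().__init__()
        self.links: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href" and v)

def crawl_step(url: str) -> list[str]:
    """Crawler behavior: this is the mode that should honor robots.txt."""
    collector = LinkCollector()
    collector.feed(fetch_for_user(url))
    return collector.links
```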

replies(1): >>45943425 #
mcv ◴[] No.45943425[source]
How did you get the URL for curl? Do you personally look for hidden links in pages to follow? This isn't an issue for people looking at the page; it's only a problem for systems that automatically follow all the links on a page.
replies(1): >>45950881 #
varenc ◴[] No.45950881[source]
Yeah, I think the context for my reply got lost. I was responding to someone saying that an LLM-powered user agent (browser) should respect robots.txt. And it wouldn't be clicking the hidden link because it's not crawling.