It feels odd because I find I'm writing code to detect anti-bot tools even though I'm trying my best to follow conventions.
Gating robots.txt might be a mistake, but it might also be a quick way to deal with crawlers that mine robots.txt for pages that look interesting. It's also a page that's never visited by humans. So if you make it a tarpit, you both refuse to give the bot more information and slow it down.
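The tarpit idea can be sketched as a handler that drips the response out one byte at a time, tying up the crawler's connection. This is a minimal illustration, not anyone's actual implementation; the delay value and the generator shape are assumptions, and a real deployment would wire this into a streaming HTTP response.

```python
import time

# A deliberately uninformative robots.txt body.
ROBOTS_TXT = "User-agent: *\nDisallow: /\n"

def tarpit_robots(delay_per_byte=1.0):
    """Yield robots.txt one byte at a time, sleeping between bytes.

    Streamed as an HTTP chunked response, this keeps a crawler's
    connection occupied for len(ROBOTS_TXT) * delay_per_byte seconds
    while revealing nothing beyond a blanket Disallow.
    """
    for ch in ROBOTS_TXT:
        time.sleep(delay_per_byte)
        yield ch

# With delay 0 we can drain it instantly to see the full body.
body = "".join(tarpit_robots(delay_per_byte=0))
```

A crawler that honors timeouts will give up partway through; one that doesn't wastes a connection slot on your slowest endpoint.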
It's crap that it's affecting your work, but a website owner isn't likely to care about the distinction when they're pissed off at having to deal with bad actors that they should never have to care about.
Never is a strong word. I have definitely visited robots.txt of various websites for a variety of random reasons.
- remembering the format
- seeing what they might have tried to "hide"
- using it like a site's directory
- testing whether the website is up at all when its main dashboard/index is offline
In fairness, however, my daughters ask me that question all the time, and it is possible that the verification checkboxes are lying to me as part of some grand conspiracy to make me think I am a human when I am not.
Though I think passing them is more a sign that you're a robot than anything else.