It feels odd because I find I'm writing code to detect anti-bot tools even though I'm trying my best to follow conventions.
Never attribute to malice what can be adequately explained by incompetence.
Gating robots.txt might be a mistake, but it might also be a quick way to deal with crawlers that mine robots.txt for the more interesting pages. It's also a page that's never visited by humans. So if you make it a tarpit, you both refuse to give the bot more information and slow it down.
It's crap that it's affecting your work, but a website owner isn't likely to care about the distinction when they're pissed off at having to deal with bad actors they should never have had to care about in the first place.
Never is a strong word. I have definitely visited the robots.txt of various websites for a variety of random reasons:
- remembering the format
- seeing what they might have tried to "hide"
- using it like a site's directory
- checking whether the site is up at all when its main dashboard/index is offline
In fairness, however, my daughters ask me that question all the time, and it is possible that the verification checkboxes are lying to me as part of some grand conspiracy to make me think I am a human when I am not.
I'd treat this in a client the same way as I do in a server application. If the peer is behaving maliciously or improperly, I silently drop the TCP connection without notifying the other party. They can waste their resources by continuing to send bytes for the next few minutes until their own TCP stack realizes what happened.
Additionally, it's not going to use that many resources before your kernel sends it a RST the next time a data packet arrives.
Though I think passing them is more a sign that you're a robot than anything else.