
646 points blendergeek | 2 comments
quchen No.42725651
Unless this concept becomes a mass phenomenon with many implementations, isn't it pretty easy to filter out? Furthermore, since it antagonizes billion-dollar companies that can spin up teams doing nothing but browsing GitHub and HN for software like this to keep it from polluting their data lakes, I wonder whether this is really an efficient approach.
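A crude version of such a filter is indeed only a few lines: cap the number of pages the crawler takes from any single domain, which bounds the damage an infinite-maze tarpit can do regardless of how it is implemented. A minimal sketch in Python; the cap value and the should_fetch helper are illustrative assumptions, not taken from any real crawler:

    from collections import Counter
    from urllib.parse import urlparse

    # Hypothetical per-domain crawl budget; a real crawler would tune this.
    MAX_PAGES_PER_DOMAIN = 10_000
    pages_seen = Counter()

    def should_fetch(url: str) -> bool:
        """Skip URLs once their domain has exhausted its crawl budget.

        A tarpit generates endless pages under one domain, so a flat
        per-domain cap bounds how much of the crawl it can consume.
        """
        domain = urlparse(url).netloc
        if pages_seen[domain] >= MAX_PAGES_PER_DOMAIN:
            return False
        pages_seen[domain] += 1
        return True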
btilly No.42725983
It would be more efficient for them to spin up a team to study this robots.txt thing. They've ignored that low-hanging fruit, so they won't do anything more sophisticated any time soon.
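The low-hanging fruit is in fact already in Python's standard library; a minimal sketch of a robots.txt-respecting fetch gate, where "ExampleBot" and the URLs are placeholders rather than any real crawler's identifiers:

    import urllib.robotparser

    rp = urllib.robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt

    # "ExampleBot" is a placeholder user-agent token.
    if rp.can_fetch("ExampleBot", "https://example.com/some/page"):
        pass  # the site permits this fetch
    else:
        pass  # the site opted out; skipping also avoids wasted bandwidth

Gating every request this way is also where the cost saving mentioned in the next reply comes from: a disallowed URL is a request never made.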
1. tgv No.42726821
You can't make money from studying robots.txt, but you can avoid costs by skipping bad websites.
2. xeromal No.42730213
Sounds like a benefit for the site owner, lol. It accomplished exactly what they wanted.