597 points classichasclass | 65 comments
bob1029 ◴[] No.45011628[source]
I think a lot of really smart people are letting themselves get taken for a ride by the web scraping thing. Unless the bot activity is legitimately hammering your site and causing issues (not saying this isn't happening in some cases), this mostly amounts to an ideological game of capture the flag. The difference being that you'll never find their flag. The only thing you win by playing is lost time.

The best way to mitigate the load from diffuse, unidentifiable, grey area participants is to have a fast and well engineered web product. This is good news, because your actual human customers would really enjoy this too.

replies(7): >>45011652 #>>45011830 #>>45011850 #>>45012424 #>>45012462 #>>45015038 #>>45015451 #
1. phito ◴[] No.45011652[source]
My friend has a small public gitea instance, only used by him and a few friends. He's getting thousands of requests an hour from bots. I'm sorry, but even if it does not impact his service, at the very least it feels like harassment.
replies(7): >>45011694 #>>45011816 #>>45011999 #>>45013533 #>>45013955 #>>45014807 #>>45025114 #
2. dmesg ◴[] No.45011694[source]
Yes, and it makes reading your logs needlessly harder. Sometimes I find an odd password being probed, search for it on the web and find an interesting story, e.g. that a new backdoor was discovered in a commercial appliance.

In that regard, reading my logs has sometimes led me to interesting articles about cybersecurity. Also, log flooding may cause your journaling service to truncate the log, so you miss something important.

replies(3): >>45011747 #>>45011811 #>>45012470 #
3. wvbdmp ◴[] No.45011747[source]
You log passwords?
replies(4): >>45013224 #>>45014657 #>>45014868 #>>45018054 #
4. ◴[] No.45011811[source]
5. bob1029 ◴[] No.45011816[source]
Thousands of requests per hour? So, something like 1-3 per second?

If this is actually impacting perceived QoS then I think a gitea bug report would be justified. Clearly there's been some kind of a performance regression.

Just looking at the logs seems to be an infohazard for many people. I don't see why you'd want to inspect the septic tanks of the internet unless absolutely necessary.

replies(5): >>45014694 #>>45014705 #>>45015142 #>>45016540 #>>45019745 #
6. wraptile ◴[] No.45011999[source]
> thousands of requests an hour from bots

That's not much for any modern server so I genuinely don't understand the frustration. I'm pretty certain gitea should be able to handle thousands of read requests per minute (not per hour) without even breaking a sweat.

replies(3): >>45012092 #>>45016557 #>>45019778 #
7. q3k ◴[] No.45012092[source]
Serving file content/diff requests from gitea/forgejo is quite expensive computationally. And these bots tend to tarpit themselves when they come across, e.g., a Linux repo mirror.

https://social.hackerspace.pl/@q3k/114358881508370524

replies(2): >>45012546 #>>45015199 #
8. rollcat ◴[] No.45012470[source]
> Sometimes I find an odd password being probed, search for it on the web and find an interesting story [...].

Yeah, this is beyond irresponsible. You know the moment you're pwned, __you__ become the new interesting story?

For everyone else, use a password manager to pick a random password for everything.

replies(1): >>45012625 #
9. rollcat ◴[] No.45012546{3}[source]
I think at this point every self-hosted forge should block diffs from anonymous users.

Also worth a look: Anubis and go-away. But keep in mind that some people are on old browsers or underpowered computers.

10. Thorrez ◴[] No.45012625{3}[source]
What is beyond irresponsible? Monitoring logs and researching odd things found there?
replies(2): >>45013099 #>>45013232 #
11. JohnFen ◴[] No.45013099{4}[source]
How are passwords ending up in your logs? Something is very, very wrong there.
replies(2): >>45013284 #>>45014850 #
12. ◴[] No.45013224{3}[source]
13. rollcat ◴[] No.45013232{4}[source]
The way to handle a password:

    plaintext := []byte(r.PostFormValue("password"))
    err := bcrypt.CompareHashAndPassword(hashedPassword, plaintext)
    // (plaintext is never stored or logged; it simply goes out of scope here)
    if err == nil { /* ok, authenticated */ }
Bonus points: on user lookup, when no user is found, fetch a dummy hashedPassword, compare, and ignore the result. This will partially mitigate username enumeration via timing attacks.
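A rough sketch of that bonus step (Go, assuming golang.org/x/crypto/bcrypt; lookupUser and user.PasswordHash are made-up names):

    // Generated once at startup; used whenever no user matches, so the request
    // still pays for exactly one bcrypt comparison either way.
    var dummyHash, _ = bcrypt.GenerateFromPassword([]byte("dummy"), bcrypt.DefaultCost)

    func checkLogin(username, plaintext string) bool {
        user, found := lookupUser(username) // hypothetical user lookup
        hash := dummyHash
        if found {
            hash = user.PasswordHash
        }
        err := bcrypt.CompareHashAndPassword(hash, []byte(plaintext))
        return found && err == nil // the dummy comparison can never authenticate
    }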
replies(1): >>45014452 #
14. dmesg ◴[] No.45013284{5}[source]
Does an attacking bot know your webserver is not a misconfigured router exposing its web interface to the net? I'm often baffled by the conclusions people come up with from half-reading posts. I had bots attack me with SSH 2.0 login attempts on ports 80 and 443. Some people underestimate how bad at computer science some skids are.
replies(3): >>45014297 #>>45015462 #>>45019689 #
15. ralferoo ◴[] No.45013533[source]
What's worse is when you get bots blasting HTTP traffic at every open port, even well-known services like SMTP. Seriously, it's a mail server. It identified itself as soon as the connection was opened; if they waited 100-300ms before spamming, they'd know it wasn't HTTP, because an HTTP server wouldn't send anything at all unprompted. There's literally no need to bombard a mail server on a well-known port by continuing to send a load of junk that's just going to fill someone's log file.
replies(2): >>45014905 #>>45015499 #
16. immibis ◴[] No.45013955[source]
I have a small public gitea instance that got thousands of requests per hour from bots.

I encountered exactly one actual problem: the temporary folder for zip snapshots filled up the disk since bots followed all snapshot links and it seems gitea doesn't delete generated snapshots. I made that directory read-only, deleted its contents, and the problem was solved, at the cost of only breaking zip snapshots.

I experienced no other problems.

I did put some user-agent checks in place a while later, but that was just for fun to see if AI would eventually ingest false information.

17. socksy ◴[] No.45014297{6}[source]
Also baffled that three separate people came to that conclusion. Do they not run web servers on the open web or something? Script kiddies are constantly probing urls, and urls come up in your logs. Sure it would be bad if that was how your app was architected. But it's not how it's architected, it's how the skids hope your app is architected. It's not like if someone sends me a request for /wp-login.php that my rails app suddenly becomes WordPress??
replies(2): >>45014904 #>>45019187 #
18. Sophira ◴[] No.45014452{5}[source]
I believe you may have misinterpreted the comment. They're not talking about logs that were made from a login form on their website. They're talking about generic logs (sometimes not even web server logs) being generated because of bots that are attempting to find vulnerabilities on random pages. Pages that don't even exist or may not even be applicable on this server.
19. zeta0134 ◴[] No.45014657{3}[source]
Just about nobody logs passwords on purpose. But really stupid IoT devices accept credentials as like query strings, or part of the path or something, and it's common to log those. The attacker is sending you passwords meant for a much less secure system.
replies(1): >>45015432 #
20. zeta0134 ◴[] No.45014694[source]
One of the most common issues we helped customers solve when I worked in web hosting was low disk alerts, usually because the log rotation had failed. Often the content of those logs was exactly this sort of nonsense and had spiked recently due to a scraper. The sheer size of the logs can absolutely be a problem on a smaller server, which is more and more common now that the inexpensive server is often a VM or a container.
21. tedivm ◴[] No.45014705[source]
Depending on what they're actually pulling down this can get pretty expensive. Bandwidth isn't free.
22. kiitos ◴[] No.45014807[source]
every single IPv4 address in existence receives constant malicious traffic, from uncountably many malicious actors, on all common service ports (80, 443, 22, etc.) and, for HTTP specifically, to an enormous and growing number of common endpoints (mostly WordPress related, last I checked)

if you put your server up on the public internet then this is just table stakes stuff that you always need to deal with, doesn't really matter whether the traffic is from botnets or crawlers or AI systems or anything else

you're always gonna deal with this stuff well before the requests ever get to your application, with WAFs or reverse proxies or (idk) fail2ban or whatever else

also 1000 req/hour is around 1 request every 4 seconds, which is statistically 0 rps for any endpoint that would ever be publicly accessible
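
for illustration, here's roughly what "handle it before it reaches the app" can look like: a tiny go reverse proxy with per-IP token buckets (a sketch only; the limits and backend address are made up, and a real setup would evict idle limiter entries)

    package main

    import (
        "net"
        "net/http"
        "net/http/httputil"
        "net/url"
        "sync"

        "golang.org/x/time/rate"
    )

    var (
        mu       sync.Mutex
        limiters = map[string]*rate.Limiter{}
    )

    // limiterFor hands out one token bucket per client IP: 5 req/s, burst of 20.
    func limiterFor(ip string) *rate.Limiter {
        mu.Lock()
        defer mu.Unlock()
        if l, ok := limiters[ip]; ok {
            return l
        }
        l := rate.NewLimiter(5, 20)
        limiters[ip] = l
        return l
    }

    func main() {
        backend, _ := url.Parse("http://127.0.0.1:3000") // e.g. a local gitea
        proxy := httputil.NewSingleHostReverseProxy(backend)

        http.ListenAndServe(":8080", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            ip, _, _ := net.SplitHostPort(r.RemoteAddr)
            if !limiterFor(ip).Allow() {
                http.Error(w, "too many requests", http.StatusTooManyRequests)
                return // rejected before the application ever sees it
            }
            proxy.ServeHTTP(w, r)
        }))
    }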

replies(2): >>45015080 #>>45015487 #
23. hvb2 ◴[] No.45014850{5}[source]
If the caller puts it in the query string and you log that? It doesn't have to be valid in your application for an attacker to pass it in.

So unless you're not logging your request path/query string, you're doing something very, very wrong by your own logic :). I can't imagine diagnosing issues with web requests without being given the path + query string. You can diagnose without it, but you're sure not making things easier.
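
One middle ground (just a sketch; the helper name and key list are invented) is to redact the obviously sensitive query parameters before the line ever hits the log:

    // redactQuery blanks out likely-sensitive query parameters before logging.
    func redactQuery(u *url.URL) string {
        q := u.Query()
        for _, k := range []string{"password", "passwd", "pwd", "token", "secret"} {
            if q.Has(k) {
                q.Set(k, "REDACTED")
            }
        }
        clean := *u // copy, so the real request URL is untouched
        clean.RawQuery = q.Encode()
        return clean.String()
    }

    // e.g. in an access-log middleware:
    //   log.Printf("%s %s", r.Method, redactQuery(r.URL))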

24. stronglikedan ◴[] No.45014868{3}[source]
Sure, why not. Log every secret you come across (or that comes across you). Just don't log your own secrets. Like OP said, it led down some interesting trails.
25. JohnFen ◴[] No.45014904{7}[source]
> Do they not run web servers on the open web or something?

Until AI crawlers chased me off of the web, I ran a couple of fairly popular websites. I just so rarely see anybody including passwords in the URLs anymore that I didn't really consider that as what the commenter was talking about.

replies(1): >>45018636 #
26. JdeBP ◴[] No.45014905[source]
I remember putting dummy GET/PUT/HEAD/POST verbs into SMTP relay software a quarter of a century ago. Attackers do not really save themselves time and money by being intelligent about this. So they aren't.

There are attackers out there that send SIP/2.0 OPTIONS requests to the GOPHER port, over TCP.

27. NegativeK ◴[] No.45015080[source]
I've heard this point raised elsewhere, and I think it's underplaying the magnitude of the issue.

Background scanner noise on the internet is incredibly common, but the AI scraping is not at the same level. Wikipedia has published that their infrastructure costs have notably shot up since LLMs started scraping them. I've seen similar idiotic behavior on a small wiki I run; a single AI company took the data usage from "who gives a crap" to "this is approaching the point where I'm not willing to pay to keep this site up." Businesses can "just" pass the costs onto the customers (which is pretty shit at the end of the day), but a lot of privately run and open source sites are now having to deal with side crap that isn't relevant to their focus.

The botnets and DDOS groups that are doing mass scanning and testing are targeted by law enforcement and eventually (hopefully) taken down, because what they're doing is acknowledged as bad.

AI companies, however, are trying to make a profit off of this bad behavior and we're expected to be okay with it? At some point impacting my services with your business behavior goes from "it's just the internet being the internet" to willfully malicious.

replies(3): >>45015235 #>>45018955 #>>45045233 #
28. dkiebd ◴[] No.45015142[source]
I love the snark here. I work at a hosting company and the only customers who have issues with crawlers are those who have stupidly slow webpages. It’s hard to have any sympathy for them.
replies(1): >>45018800 #
29. diggan ◴[] No.45015199{3}[source]
> Serving file content/diff requests from gitea/forgejo is quite expensive computationally

One time, sure. But unauthenticated requests would surely be cached, while authenticated ones skip the cache (just like HN works :) ); most internet-facing websites end up using this pattern.
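
Roughly the pattern, as a sketch (Go; Cache and sessionFrom are stand-ins, and whether it holds up for a git forge is exactly what the replies below dispute):

    // cacheAnonymous serves anonymous GETs from a shared cache and sends
    // everything else (and all authenticated traffic) straight to the app.
    // Status codes and response headers are omitted for brevity.
    func cacheAnonymous(cache *Cache, next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if _, loggedIn := sessionFrom(r); loggedIn || r.Method != http.MethodGet {
                next.ServeHTTP(w, r) // authenticated or non-GET: skip the cache
                return
            }
            if body, ok := cache.Get(r.URL.String()); ok {
                w.Write(body) // cache hit: the app never sees the request
                return
            }
            rec := httptest.NewRecorder()
            next.ServeHTTP(rec, r)
            cache.Set(r.URL.String(), rec.Body.Bytes())
            w.Write(rec.Body.Bytes())
        })
    }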

replies(2): >>45015854 #>>45018346 #
30. kiitos ◴[] No.45015235{3}[source]
this is a completely fair point, it may be the case that AI scraper bots have recently made the magnitude and/or details of unwanted bot traffic to public IP addresses much worse

but yeah the issue is that as long as you have something accessible to the public, it's ultimately your responsibility to deal with malicious/aggressive traffic

> At some point impacting my services with your business behavior goes from "it's just the internet being the internet" to willfully malicious.

I think maybe the current AI scraper traffic patterns are actually what "the internet being the internet" is from here forward

replies(1): >>45059613 #
31. SoftTalker ◴[] No.45015432{4}[source]
You probably shouldn't log usernames then, or really any form fields, as users might accidentally enter a password into one of them. Kind of defeats the point of web forms, but safety is important!
replies(2): >>45018570 #>>45019660 #
32. SoftTalker ◴[] No.45015462{6}[source]
Running ssh on 80 or 443 is a way to get around boneheaded firewalls that allow http(s) but block ssh, so it's not completely insane to see probes for it.
33. sidewndr46 ◴[] No.45015487[source]
I was kind of amazed to learn that apparently if you connect Windows NT4/98/2000/ME to a public IPv4 address, it gets infected by a period-correct worm in no time at all. I don't mean that someone uses an RCE to turn it into part of a botnet (that is expected); apparently there are enough infected hosts from 20+ years ago still out there that the Sasser worm is still spreading.
replies(1): >>45018039 #
34. sidewndr46 ◴[] No.45015499[source]
It's even funnier when you realize it is a request for a known exploit in WordPress. Does someone really run that on port 22?
replies(1): >>45016435 #
35. q3k ◴[] No.45015854{4}[source]
You can't feasibly cache large reposotories' diffs/content-at-version without reimplementing a significant part of git - this stuff is extremely high cardinality and you'd just constantly thrash the cache the moment someone does a BFS/DFS through available links (as these bots tend to do).
36. Sohcahtoa82 ◴[] No.45016435{3}[source]
I HAVE heard of someone that runs SSH on port 443 and HTTPS on 22.

It blocks a lot of bots, but I feel like just running on a high port number (10,000+) would likely do better.

replies(1): >>45021476 #
37. p3rls ◴[] No.45016540[source]
I usually get 10 a second hitting the same content pages 10 times an hour; is that not what you guys are getting from Googlebot?
38. p3rls ◴[] No.45016557[source]
and this is how the entire web was turned into wordpress slop and cryptoscams
39. hugo1789 ◴[] No.45018039{3}[source]
I still remember how we installed Windows PCs at home if no media with the latest service pack was available. Install Windows, download service pack, copy it away, disconnect from internet, throw away everything and install Windows again...
40. dpkirchner ◴[] No.45018054{3}[source]
I remember back before ssh was a thing folks would log login attempts -- it was easy to get some people's passwords because it was common for them to accidentally use them as the username (which are always safe to log, amirite?). All you had to do was watch for a failed login followed by a successful login from the same IP.
41. Sesse__ ◴[] No.45018346{4}[source]
There are _lots_ of objects in a large git repository. E.g., I happen to have a fork of VLC lying around. VLC has 70k+ commits (on that version). Each commit has about 10k files. The typical AI crawler wants, for every commit, to download every file (so 700M objects), every tarball (70k+ .tar.gz files), and the blame layer of every file (700M objects, where blame has to look back on average 35k commits). Plus some more.

Saying “just cache this” is not sustainable. And this is only one repository; the only reasonable way to deal with this is some sort of traffic mitigation. You cannot just treat the traffic as the happy path.

42. Dylan16807 ◴[] No.45018570{5}[source]
Are you using a very weird definition of "logging" to make a joke? Web forms don't need any logging to work.
replies(1): >>45034446 #
43. viridian ◴[] No.45018636{8}[source]
Just about every crawler that tries probing for wordpress vulnerabilities does this, or includes them in the naked headers as a part of their deluge of requests.
44. egypturnash ◴[] No.45018800{3}[source]
Isn't it part of your job to help them fix that?
replies(1): >>45019092 #
45. 0x457 ◴[] No.45018955{3}[source]
So weird to scrape Wikipedia when you can just download a DB dump from them.
replies(2): >>45019746 #>>45029833 #
46. 0x457 ◴[] No.45019092{4}[source]
How? They are a hosting company, not a webshop.
47. maxbond ◴[] No.45019187{7}[source]
> It's not like if someone sends me a request for /wp-login.php that my rails app suddenly becomes WordPress??

You're absolutely right. That's my mistake — you are requesting a specific version of WordPress, but I had written a Rails app. I've rewritten the app as a WordPress plugin and deployed it. Let me know if there's anything else I can do for you.

48. hinkley ◴[] No.45019660{5}[source]
So no access logs at all then? That sounds effective.
49. hinkley ◴[] No.45019689{6}[source]
I recall finding weird URLs in my access logs way back when where someone was trying to hit my machine with the CodeRed worm, a full decade after it was new. That was surreal.
50. hinkley ◴[] No.45019745[source]
We were only getting 60% of our traffic from bots at my last place because we throttled a bunch of sketchy bots to around 50 simultaneous requests, which was on the order of 100/s. Our customers were paying for SEO, so the bot traffic was a substantial cost of doing business. But as someone tasked with decreasing cluster size, I was forever jealous of the large amount of cluster capacity that wasn't being seen by humans.
51. xp84 ◴[] No.45019746{4}[source]
Really makes you think about the calibre of minds being applied to buzzy problem spaces these days, doesn't it?
replies(1): >>45020373 #
52. hinkley ◴[] No.45019778[source]
We were seeing over a million hits per hour from bots, and I agree with GP: it’s fucking out of control. And it’s at least 100x worse if you sell vanity URLs, because the good bots throttle per domain and can’t tell that, by hitting five of your other domains at once, they’re still sending you 100 simultaneous requests.
53. socalgal2 ◴[] No.45020373{5}[source]
do we know they didn't download the DB? Maybe the new traffic is the LLM reading the site? (not the training)

I don't know that LLMs read sites. I only know when I use one it tells me it's checking site X, Y, Z, thinking about the results, checking sites A, B, C etc.... I assumed it was actually reading the site on my behalf and not just referring to its internal training knowledge.

Like, how many people are training LLMs, and how often does each one scrape? From the outside, it feels like the big ones (ChatGPT, Gemini, Claude, etc.) scrape only a few times a year at most.

replies(1): >>45031480 #
54. mjmas ◴[] No.45021476{4}[source]
I have a service running on a high port number on just a straight IPv4 address, and it does get a bit of bot traffic, but it's generally easy to filter out when looking at logs. (Well-behaved bots have a domain in their User-Agent, and bingbot takes my robots.txt into account. I don't think I've seen the Google crawler. Other bots can generally be worked out as anything that didn't request my manifest.json a few seconds after loading the main page.)
55. integralid ◴[] No.45025114[source]
Thousands per hour is 0.3-3 requests per second, which is... not a lot? I host a personal website and it got much more noise before LLMs were even a thing.
56. nitwit005 ◴[] No.45029833{4}[source]
When you have a pile of funding, and you get told to do things quickly.
replies(1): >>45031767 #
57. xp84 ◴[] No.45031480{6}[source]
I would guess site operators can tell the difference between an exhaustive crawl and the targeted specific traffic I'd expect to see from an LLM checking sources on-demand. For one thing, the latter would have time-based patterns attributable to waking hours in the relevant parts of the world, whereas the exhaustive crawl traffic would probably be pretty constant all day and night.

Also to be clear I doubt those big guys are doing these crawls. I assume it's small startups who think they're gonna build a big dataset to sell or to train their own model.

58. 0x457 ◴[] No.45031767{5}[source]
But the correct way (getting a sql dump) is faster?
replies(1): >>45033424 #
59. nitwit005 ◴[] No.45033424{6}[source]
Had to get the web scraper working for other websites.
60. SoftTalker ◴[] No.45034446{6}[source]
You save them in a database. Probably in clear text. Six of one, half-dozen of the other.
replies(1): >>45035077 #
61. Dylan16807 ◴[] No.45035077{7}[source]
A password being put into a normal text field in a properly submitted form is a lot less likely than getting into some query or path. And a database is more likely to be handled properly than some random log file.

Six of one, .008 of a dozen of the other.

62. BlueTemplar ◴[] No.45045233{3}[source]
From your example (and many others), AI companies are engaging in DDoS too, so why wouldn't law enforcement target them too?
replies(1): >>45059599 #
63. NegativeK ◴[] No.45059599{4}[source]
As a first and very pessimistic guess, the pages getting DoSed are maintained by people or groups with pretty minimal resources. That means time or money available for lawyers isn't there, and the monetary impact per website is small enough that LE may not care.

Also, they might share the common viewpoint of "it's the internet; suck it up."

64. NegativeK ◴[] No.45059613{4}[source]
> I think maybe the current AI scraper traffic patterns are actually what "the internet being the internet" is from here forward

Kinda my point was that it's only the internet being the internet if we tolerate it. If enough people give a crap, the corporations doing it will have to knock it off.

replies(1): >>45069626 #
65. kiitos ◴[] No.45069626{5}[source]
i appreciate the sentiment but no amount of people giving a crap will ever impact the stuff we're talking about here, because the stuff we're talking about here is in no way governed or influenced by popular opinion or anything even remotely adjacent to popular opinion

if you wanna rage against the machine then more power to you but this line of thinking is dead on arrival in terms of outcome