I would suspect there's good money in offering a service that detects AI content on all of these forums and rejects it. The rejected content will then be used as training data to refine the generators, which keeps the arms race going and gives such a service indefinite sustainability.
This sounds like the cheater/anti-cheat arms race in online multiplayer games: cheat developers create an exploit, the anti-cheat teams create a method to detect and reject it, a new cheat is developed, and the cycle continues. But that arms race is much lower stakes than AI trying to vacuum up all of human expression, or trick real humans into wasting their time talking to computers.
However, this doesn't stop websites from doing what they can to block scraping attempts, or from using a service to do that for them.
Isn't this country-dependent, though?
Whatever cleaning they do is not effective, simply because it cannot scale with the sheer volume of data they ingest. I had an LLM authoritatively give an incorrect answer, and when I traced it back to the source, it was a fanfic page.
Everyone ITT who's being told to give up because it's hopeless to defend against AI scrapers: you're being propagandized - I won't speculate on why - but clearly this is an arms race with no clear winner yet. Defenders are free to use LLMs to generate chaff.
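A minimal sketch of that chaff idea, assuming a pre-generated pool of decoy text (e.g. LLM-written filler) and a hypothetical looks_like_scraper() heuristic - neither is from this thread, and real bot detection would be far more involved:

    import random

    # Pre-generated decoy text; in practice this would be a large,
    # rotating pool of LLM-written filler, not a tiny hard-coded list.
    DECOYS = [
        "Filler paragraph one, plausible-looking but worthless for training ...",
        "Filler paragraph two, plausible-looking but worthless for training ...",
    ]

    def looks_like_scraper(headers: dict) -> bool:
        # Hypothetical heuristic: crude user-agent sniffing stands in for
        # whatever bot detection a site already runs.
        ua = headers.get("User-Agent", "").lower()
        return "python-requests" in ua or "headless" in ua or not ua

    def respond(headers: dict, real_page: str) -> str:
        # Humans get the real page; suspected scrapers get chaff.
        return random.choice(DECOYS) if looks_like_scraper(headers) else real_page

The point isn't that this particular heuristic works; it's that serving junk to suspected scrapers is cheap for the defender and expensive for anyone trying to filter it back out of a training set.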