The problem is the non-consumptive load where they just flat-out DDoS the site for no actual reason. They should be criminally charged for that.
Late edit: Individual page loads to answer specific questions aren't a problem either. DDoS is the problem.
I was at an interview for a tier-one AI lab, and the PM I was talking to refused to believe that the torrent dumps from Wikipedia were fresh and usable for training.
When you spend all your time fighting bot-detection measures, it's hard to imagine someone willingly putting their data out there for free.
The correct way to do this is to stand up a copy of MediaWiki on your own infra and then scrape that. That will give you shittons of HTML to parse and tokenize. If you can't work with that, then you're not qualified to do this kind of thing, sorry.
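For concreteness, here's a minimal sketch of what scraping your own mirror could look like, assuming you've loaded a dump into a local MediaWiki that serves its Action API at localhost:8080 (that URL and the page title are placeholders of mine, not anything Wikimedia runs):

    # Pull rendered HTML from a self-hosted MediaWiki via action=parse.
    import requests

    API = "http://localhost:8080/w/api.php"  # hypothetical local mirror

    def fetch_rendered_html(title: str) -> str:
        """Have the local wiki render a page to HTML instead of raw Wikitext."""
        resp = requests.get(API, params={
            "action": "parse",
            "page": title,
            "prop": "text",
            "format": "json",
            "formatversion": "2",
        }, timeout=30)
        resp.raise_for_status()
        return resp.json()["parse"]["text"]

    html = fetch_rendered_html("Alan Turing")
    # feed `html` into whatever HTML parser / tokenizer you like

From there it's ordinary HTML parsing, and you never touch Wikimedia's production servers.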
[0] If you're wondering, I was scraping Wikimedia Commons directly from their public API, from my residential IP with my e-mail address in the UA. This was primarily out of laziness, but I believe this is the way you're "supposed" to use the API.
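In case it's useful, the polite pattern looks roughly like this; a sketch only, with placeholder contact details and file title, not the exact calls I made:

    # Query the public Commons API with a descriptive User-Agent that
    # includes contact info, per Wikimedia's User-Agent policy.
    import requests

    session = requests.Session()
    session.headers["User-Agent"] = (
        "my-dataset-builder/0.1 (https://example.org; you@example.org)"
    )

    resp = session.get("https://commons.wikimedia.org/w/api.php", params={
        "action": "query",
        "titles": "File:Example.jpg",
        "prop": "imageinfo",
        "iiprop": "url|extmetadata",
        "format": "json",
        "formatversion": "2",
    }, timeout=30)
    resp.raise_for_status()
    info = resp.json()["query"]["pages"][0]["imageinfo"][0]
    print(info["url"])
    print(sorted(info["extmetadata"].keys()))  # license, attribution, etc.

One request at a time from one residential IP is nothing; the DDoS-level traffic described upthread is a different animal entirely.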
Yes, I did try to work with Wikitext directly, and yes, that is a terrible idea.
From the same set of interviews, I made the point that the only way to meaningfully extract the semantics of a page meant for human consumption is to use a vision model that treats the typesetting as a guide to the structure.
The perfect example was the contract they sent, which looked completely fine but was a Word document with only WYSIWYG formatting: headings were just extra-large bold text rather than marked up as headings. If you used the programmatically extracted text as training data, you'd be in trouble.
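A quick way to see the problem, if you poke at a docx like that with python-docx (the file name and size threshold here are made up for illustration): a real heading carries a "Heading N" style, while big bold text is just a Normal paragraph, so any style-based extraction flattens it into body text.

    # Spot "visual headings" that carry no structural markup.
    from docx import Document

    doc = Document("contract.docx")  # placeholder file name
    for para in doc.paragraphs:
        is_styled_heading = para.style.name.startswith("Heading")
        looks_like_heading = (
            para.runs
            and all(run.bold for run in para.runs)
            and any(run.font.size and run.font.size.pt >= 14 for run in para.runs)
        )
        if looks_like_heading and not is_styled_heading:
            # Visually a heading, structurally indistinguishable from body text.
            print("WYSIWYG 'heading':", para.text)

A vision model sees the same oversized bold line a human does; the text extractor only sees another Normal paragraph.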