toomuchtodo No.43708221
This sounds like a good way to take the ML/AI consumption load off Wikimedia infra?
immibis No.43708252
The consumption load isn't the problem. You can download a complete dump of Wikipedia, and even if every AI company downloaded the newest dump every time one came out, the server load would be manageable: probably double-digit terabytes per month, which is nothing special these days. And if even that were a problem, Wikimedia could charge a reasonable amount to ship the dump on a stack of BD-R discs, or these companies could easily afford a leased line to Wikimedia HQ.
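
For scale, here's a rough back-of-envelope sketch; the dump size and the number of companies are assumptions, not Wikimedia figures:

    # Rough back-of-envelope estimate; the figures are assumptions,
    # not official Wikimedia numbers.
    DUMP_SIZE_GB = 100      # assumed compressed size of a full dump
    COMPANIES = 200         # assumed number of AI companies pulling every dump
    DUMPS_PER_MONTH = 2     # dumps are published roughly twice a month

    monthly_tb = DUMP_SIZE_GB * COMPANIES * DUMPS_PER_MONTH / 1000
    print(f"~{monthly_tb:.0f} TB/month")  # ~40 TB/month: double-digit terabytes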

The problem is the non-consumptive load, where they just flat-out DDoS the site for no actual reason. They should be criminally charged for that.

Late edit: Individual page loads to answer specific questions aren't a problem either. DDoS is the problem.

parpfish No.43708581
I'd assume AI companies use the wiki dumps for training, but there are probably tons of bots that query the wiki over the web when doing some sort of web-search/function call.
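
Such a per-question lookup might look roughly like this, a minimal sketch against the public MediaWiki API; the page title, User-Agent string, and contact address are placeholders:

    # Minimal sketch of a per-question lookup via the public MediaWiki API.
    # The title and User-Agent values are placeholders, not a real bot's.
    import requests

    def fetch_summary(title: str) -> str:
        resp = requests.get(
            "https://en.wikipedia.org/w/api.php",
            params={
                "action": "query",
                "prop": "extracts",
                "exintro": True,       # only the lead section
                "explaintext": True,   # plain text, no HTML
                "titles": title,
                "format": "json",
            },
            headers={"User-Agent": "ExampleBot/0.1 (contact@example.com)"},
            timeout=10,
        )
        pages = resp.json()["query"]["pages"]
        return next(iter(pages.values())).get("extract", "")

    print(fetch_summary("Wikipedia")[:200])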
jsheard No.43708888
The bots that query in response to user prompts aren't really the issue. The really obnoxious ones crawl the entire web aimlessly looking for training data, and wikis or git repos with huge edit histories and on-demand-generated diffs are a worst-case scenario for that: even if a crawler only visits each page once, there is a near-infinite number of "unique" pages to visit.
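
A better-behaved crawler would at least filter that infinite URL space out before fetching. A rough sketch of what that filtering could look like; the patterns are illustrative, not an exhaustive list:

    # Rough sketch of skipping "infinite" wiki/forge URLs before crawling.
    # The patterns below are illustrative, not an exhaustive or official list.
    from urllib.parse import urlparse, parse_qs

    SKIP_PARAMS = {"diff", "oldid", "action", "curid"}      # MediaWiki history/diff views
    SKIP_PATH_PARTS = ("/commit/", "/compare/", "/blame/")  # per-revision forge pages

    def worth_crawling(url: str) -> bool:
        parsed = urlparse(url)
        if any(part in parsed.path for part in SKIP_PATH_PARTS):
            return False
        if SKIP_PARAMS & set(parse_qs(parsed.query)):
            return False
        return True

    print(worth_crawling("https://en.wikipedia.org/wiki/Example"))                    # True
    print(worth_crawling("https://en.wikipedia.org/w/index.php?diff=123&oldid=122"))  # False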