I don't know what they're expecting others to use this data for, but if it's the same old same old LLM training-data scraping, then you've got a perfectly good repository of syntactically correct, semantically coherent strings of characters and words in whatever languages Wikipedia supports. For that purpose, it's entirely reliable data. Whether or not it's also factually accurate doesn't matter: language modeling doesn't require factual accuracy. That comes from some later training step, if you care about it at all.
If you're trying to use it as a repository not of language examples but of facts, then recognize the limitations. Wikipedia itself performs no verification and no fact-checking, and by design it does not assure you that its content is factually accurate. Instead, it assures you that every claim of fact is accompanied by a citation to a source meeting some extremely lightweight definition of authority. So a statement of the form "A claims X" is something Wikipedia is vouching for: that A made the claim. But a bare statement of the form "X" found on Wikipedia is not something Wikipedia is claiming is true; at most, Wikipedia is claiming that some cited source said it.
It's up to the consumer of Wikipedia data to recognize this and do what they can with it.
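If you wanted to act on that distinction mechanically, here's a minimal sketch. It assumes you're working with raw wikitext, where inline citations appear as <ref>...</ref> tags; the sentence splitting is deliberately naive (a real pipeline would use a proper parser such as mwparserfromhell), and classify_claims is just an illustrative name, not anything Wikipedia ships:

```python
import re

# In raw wikitext, inline citations appear as <ref>...</ref> tags (or
# self-closing <ref name="..." />) attached to the claim they support.
REF_TAG = re.compile(r"<ref[^>/]*(?:/>|>.*?</ref>)", re.DOTALL)

# Split on whitespace that follows sentence punctuation or a closing
# ref tag, so a citation stays attached to the sentence it follows.
SENTENCE_SPLIT = re.compile(r"(?<=[.!?])\s+|(?<=</ref>)\s+")

def classify_claims(wikitext: str) -> list[tuple[str, str]]:
    """Tag each sentence as 'attributed' (carries a citation) or 'bare'."""
    results = []
    for sentence in SENTENCE_SPLIT.split(wikitext):
        label = "attributed" if REF_TAG.search(sentence) else "bare"
        # Strip the ref markup so downstream code sees clean text.
        clean = REF_TAG.sub("", sentence).strip()
        if clean:
            results.append((label, clean))
    return results

if __name__ == "__main__":
    sample = (
        "The bridge opened in 1932.<ref>Smith, History of Bridges.</ref> "
        "It is widely considered beautiful."
    )
    for label, text in classify_claims(sample):
        print(f"[{label}] {text}")
    # [attributed] The bridge opened in 1932.
    # [bare] It is widely considered beautiful.
```

The point isn't the regexes. It's that the consumer, not Wikipedia, has to decide what a bare, uncited "X" is worth.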