In theory, using LLMs to summarize knowledge could produce a less biased and more comprehensive output than human-written encyclopedias.
Whether Grokipedia will meet that challenge remains to be seen. But even if it doesn't, there's opportunity for other prospective encyclopedia generators to do so.
Humans looking through sources, applying knowledge from print articles and real-world experience to sift through the data, seems far more valuable.
The perception of bias in Wikipedia remains, and if LLMs can detect and correct for bias, then Grokipedia seems at least a theoretical win.
I'm happy with at least a set of links for further research on a topic of interest.
If there's a perception of bias, where is it coming from? It's clearly a perception born from the extreme political bias of the perceivers themselves. Addressing that sort of perception by changing the content would only increase bias.
Therefore the only logical route forward is to hash out incidences of perceived bias and, by addressing them, expose them as the bias themselves.