
Tim Bray on Grokipedia

(www.tbray.org)
175 points | Bogdanp | 2 comments
tptacek No.45777117
Why give it oxygen?
bebb No.45777411
Because it's a genuinely good idea, and hopefully one for which the execution will be improved upon over time.

In theory, using LLMs to summarize knowledge could produce a less biased and more comprehensive output than human-written encyclopedias.

Whether Grokipedia will meet that challenge remains to be seen. But even if it doesn't, there's opportunity for other prospective encyclopedia generators to do so.

epistasis No.45777850
I don't see why an LLM would be better in theory. The Wikipedia process was created to manage bias. LLMs are created to reproduce their input data, and will therefore be biased toward whatever is in the training data.

Humans looking through sources, applying knowledge from print articles and real-world experience to sift through the data, seem far more valuable.

smitty1e No.45778289
> The Wikipedia process was created to manage bias. LLMs are created to reproduce their input data, and will therefore be biased toward whatever is in the training data.

The perception of bias in Wikipedia persists, and if LLMs can detect and correct for bias, then Grokipedia is at least a theoretical win.

I'm happy with at least a set of links for further research on a topic of interest.

apical_dendrite No.45778503
Is there some objective standard for what is biased? For many people (including Elon Musk), "biased" just means something they disagree with.

When Grok says something factual that Elon doesn't like, he puts his thumb on the scale and changes how Grok responds (see the whole South African white "genocide" business). So why should we trust that an LLM will objectively detect bias, when the people in charge of training that LLM prefer that it regurgitate their preferred story rather than what is objectively true?

dragonwriter No.45778526
> Is there some objective standard for what is biased?

Generally, no.

Within a limited domain of verifiable facts, you could perhaps measure a degree of deviation from fact question by question. But constructing a distance measure that meaningfully aggregates across multiple questions is slippery without getting into subjective choices, and constructing an objective measure of directionality would be harder still.
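To make the aggregation problem concrete, here is a minimal toy sketch (my own illustration, not anything proposed in the thread): per-question deviation from a verified fact is well-defined, but a naive mean across questions depends entirely on arbitrary unit and scale choices.

```python
# Hypothetical sketch of the aggregation problem described above.
# Assumption: each question has a single verifiable numeric answer,
# so "deviation" is well-defined per question. The names here
# (deviation, aggregate_bias) are invented for illustration.

def deviation(claimed: float, fact: float) -> float:
    """Absolute error for a single factual question."""
    return abs(claimed - fact)

def aggregate_bias(answers: list[tuple[float, float]]) -> float:
    """Naive aggregate: mean of per-question deviations.

    This is exactly where subjectivity creeps in: mixing questions
    measured on different scales (years, populations, percentages)
    makes the mean depend on arbitrary unit choices, and taking
    absolute values throws away directionality entirely.
    """
    devs = [deviation(claimed, fact) for claimed, fact in answers]
    return sum(devs) / len(devs)

# Two questions on wildly different scales:
answers = [
    (1969.0, 1969.0),                       # moon landing year: exact
    (8_100_000_000.0, 8_000_000_000.0),     # world population: off by 100M
]
print(aggregate_bias(answers))  # the second question dominates completely
```

Any normalization you pick to fix the scale problem (relative error, z-scores, question weights) is itself a subjective choice, which is the commenter's point.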