
Tim Bray on Grokipedia

(www.tbray.org)
175 points | Bogdanp | 1 comment | source
generationP ◴[] No.45777297[source]
Wondering if the project will get better from the pushback or will just be folded like one of Elon's many ADHD experiments. In a sense, encyclopedias should be easy for LLMs: they are meant to survey and summarize well-documented material rather than contain novel insights; they are often imprecise and muddled already (look at https://en.wikipedia.org/wiki/Binary_tree and see how many conventions coexist without an explanation of their differences; it used to be worse a few years ago); the writing style is pretty much that of GPT-5. But the problem type of "summarize a biased source and try to remove the bias" isn't among the ones I've seen LLMs being tested for, and this is what Elon's project lives and dies by.

If I were doing a project like this, I would hire a few dozen topical experts to go over the WP articles relevant to their fields and comment on their biases rather than waste their time rewriting the articles from scratch. The results can then be published as a study, and can probably be used to shame the WP into cleaning their shit up, without needlessly duplicating the 90% of the work that it has been doing well.

replies(5): >>45777410 #>>45777700 #>>45778169 #>>45778630 #>>45782383 #
1. rsynnott ◴[] No.45782383[source]
> But the problem type of "summarize a biased source and try to remove the bias" isn't among the ones I've seen LLMs being tested for, and this is what Elon's project lives and dies by.

And if you believe that you’ll believe anything. “Try to _change_ the bias” would be closer.