
Tim Bray on Grokipedia

(www.tbray.org)
175 points Bogdanp | 9 comments
hocuspocus ◴[] No.45777495[source]
I checked a topic I care about, and that I have personally researched because the publicly available information is pretty bad.

The article is even worse than the one on Wikipedia. It follows the same structure but fails to tell a coherent story. It references random people on Reddit (!) who don't even support the point it's trying to make. Not that the information on Reddit is particularly good to begin with, even if it were properly interpreted. It cites Forbes articles parroting pretty insane and unsubstantiated claims; I thought mainstream media was not to be trusted?

In the end it's longer, written in a weird style, and doesn't really bring any value. Asking Grok about the same topic and instructing it to be succinct yields much better results.

replies(3): >>45777512 #>>45777570 #>>45779378 #
jameslk ◴[] No.45777570[source]
It was just launched? I remember when Wikipedia was pretty useless early on. The concept of using an LLM to take a ton of information and distill it down into encyclopedia form seems promising with iteration and refinement. If they add in an editor step to clean things up, that would likely help a lot (not sure if maybe they already do this)
replies(3): >>45777761 #>>45777871 #>>45779173 #
1. 9dev ◴[] No.45777761[source]
Nothing about that seems promising! The one thing you want from an encyclopedia is compressing factual information into high-density overviews. You need to be able to trust the article to be faithful to its sources. Wikipedia mods are super anal about that, and for good reason! Why on earth would we want a technology that’s as good at summarisation as it is at hallucinations to write encyclopaedia entries?? You can never trust it to be faithful to the sources. On Wikipedia, at least there’s lots of people checking on each other. There are no such guardrails for an LLM. You would need to trust a single publisher with a technology that lets them crank out millions of entries and updates continuously, so fast that you could never detect subtle changes or errors or biases targeted in a specific way—and that doesn’t even account for most people, who never even bother to question an article, let alone check the sources.

If there ever was a tool suited just perfectly for mass manipulation, it’s an LLM-written collection of all human knowledge, controlled by a clever, cynical, and misanthropic asshole with a god complex.

replies(2): >>45777963 #>>45778746 #
2. jameslk ◴[] No.45777963[source]
> Why on earth would we want a technology that’s as good at summarisation as it is at hallucinations to write encyclopaedia entries?? You can never trust it to be faithful with the sources.

Isn’t summarization precisely one of the biggest values people are getting from AI models?

What prevents one from mitigating hallucination problems with editors, as I mentioned? Are there not other ways you can think of that this might be mitigated?

> You would need to trust a single publisher with a technology that’s allowing them to crank out millions of entries and updates permanently, so fast that you could never detect subtle changes or errors or biases targeted in a specific way—and that doesn’t even account for most people, who never even bother to question an article, let alone check the sources.

How is this different from Wikipedia already? It seems that if the frequency of additions/changes is really a problem, you can slow this down. Wikipedia doesn’t just automatically let every edit take place without bots and humans reviewing changes.
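To make the "editor step" idea concrete, here is a minimal sketch of what such a gate could look like (purely illustrative; the names and the claim_supported check are hypothetical, not anything Grokipedia is known to do): generated drafts sit in a queue and are only published once every cited source is confirmed to actually support the text, whether by a human editor, an automated verifier, or both.

    from dataclasses import dataclass, field

    @dataclass
    class Draft:
        topic: str
        text: str
        citations: list[str]                    # sources the generator claims to rely on
        flags: list[str] = field(default_factory=list)

    def review(draft: Draft, claim_supported) -> bool:
        """Return True only if every cited source actually backs the text.

        claim_supported(text, url) is a stand-in for whatever check you
        trust: a human editor, a retrieval-based verifier, or both.
        """
        for url in draft.citations:
            if not claim_supported(draft.text, url):
                draft.flags.append(f"unsupported citation: {url}")
        return not draft.flags

    def publish_batch(drafts, claim_supported):
        # Drafts that fail go back for revision; only verified drafts are published.
        return [d for d in drafts if review(d, claim_supported)]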

replies(3): >>45778547 #>>45779684 #>>45782345 #
3. madeofpalk ◴[] No.45778547[source]
It’s just a different class of problem.

Human editors making mistakes is a more tractable problem than an LLM making a literally random guess (what’s the temperature for these articles?) at what to include.
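For anyone unfamiliar with the temperature knob being alluded to, here is a minimal sketch of temperature-scaled sampling (the logits and values are made up for illustration, nothing to do with Grokipedia's actual settings). Higher temperature flattens the distribution over candidate tokens, so the model's pick drifts toward a random draw; lower temperature concentrates it on the top candidate.

    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=np.random.default_rng(0)):
        # Softmax over temperature-scaled logits, then draw one token index.
        scaled = np.asarray(logits, dtype=float) / temperature
        probs = np.exp(scaled - scaled.max())   # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    logits = [2.0, 1.0, 0.2]   # hypothetical scores for three candidate tokens
    for t in (0.2, 1.0, 2.0):
        picks = [sample_next_token(logits, t) for _ in range(1000)]
        print(t, np.bincount(picks, minlength=3) / 1000)

At t=0.2 nearly every draw is the top-scored token; at t=2.0 the choices spread out across all three.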

replies(1): >>45778664 #
4. jameslk ◴[] No.45778664{3}[source]
I recall a similar argument made about why encyclopedias written by paid academics and experts were better than some randos editing Wikipedia. They’re probably still right about that, but Wikipedia won for reasons beyond purely being another encyclopedia. And it didn’t turn out too bad as an encyclopedia either.
replies(1): >>45780659 #
5. mixedump ◴[] No.45778746[source]
> If there ever was a tool suited just perfectly for mass manipulation, it’s an LLM-written collection of all human knowledge, controlled by a clever, cynical, and misanthropic asshole with a god complex.

It’s painful to watch how many people (a critical mass) don’t understand this — and how dangerous it is. When you combine that potential, if not likely, outcome with the fact that people are trained or manipulated into an “us vs. them” way of thinking, any sensible discussion point that lies somewhere in between, or any perspective that isn’t “I’m cheering for my own team no matter what,” gets absorbed into that same destructive thought process and style of discourse.

In the end, this leads nowhere — which is extremely dangerous. It creates nothing but “useful idiot”–style implicit compliance, hidden behind a self-perceived sense of “deep thinking” or “seeing the truth that the idiots on the other side just don’t get.” That mindset is the perfect mechanism — one that feeds the perfect enemy: the human ego — to make followers obey and keep following “leaders” who are merely pushing their own interests and agendas, even as people inflict damage on themselves.

This dynamic ties into other psychological mechanisms beyond the ego trap (e.g., the sunk cost fallacy), easily keeping people stuck indefinitely on the same self-destructive path — endangering societies and the future itself.

Maybe, eventually, humanity will figure out how to deal with this — with the overwhelming information overload, the rise of efficient bots, and other powerful, scalable manipulation tools now available to both good and bad actors across governments and the private sector. We are built for survival — but that doesn’t make the situation any less concerning.

6. LexiMax ◴[] No.45779684[source]
> Isn’t summarization precisely one of the biggest values people are getting from AI models?

If I want an AI summary of a Wikipedia article, I can just ask an AI and cut out the middle-man.

Not only that, once I've asked the AI to do so, I can do things like ask follow-up questions or ask it to expand on a particular detail. That's something you can't do with the copy-pasted output of an AI.

replies(1): >>45784278 #
7. xg15 ◴[] No.45780659{4}[source]
Yeah, but that act of "winning" was only possible because Wikipedia raised its own standard by a lot and reined in the randos - by insisting on citing reliable sources and forbidding original research, and by setting up a whole system of moderators and governance to determine what even counts as a "reliable source", etc.
8. rsynnott ◴[] No.45782345[source]
> Isn’t summarization precisely one of the biggest values people are getting from AI models?

I would say more that it’s one of the biggest illusory values they think they are getting. An incorrect summary is worse than useless, and LLMs are very bad at ‘summarising’.

9. jameslk ◴[] No.45784278{3}[source]
The good news is that you don’t have to use it. I see ways this idea can be improved, some of which I already mentioned in this thread. It just launched recently, so judging solely by what it is today is missing the point.