
Tim Bray on Grokipedia

(www.tbray.org)
175 points | Bogdanp | 2 comments
hocuspocus | No.45777495
I checked a topic I care about, and that I have personally researched because the publicly available information is pretty bad.

The article is even worse than the one on Wikipedia. It follows the same structure but fails to tell a coherent story. It references random people on Reddit (!) who don't even support the point it's trying to make. Not that the information on Reddit is particularly good to begin with, even if it were properly interpreted. It also cites Forbes articles parroting pretty insane and unsubstantiated claims; I thought mainstream media was not to be trusted?

In the end it's longer, written in a weird style, and doesn't really add any value. Asking Grok about the same topic and instructing it to be succinct yields much better results.

jameslk | No.45777570
It was just launched? I remember when Wikipedia was pretty useless early on. The concept of using an LLM to take a ton of information and distill it down into encyclopedia form seems promising with iteration and refinement. If they add an editor step to clean things up, that would likely help a lot (not sure if they already do this).
f33d5173 | No.45777871
It really isn't a promising idea at all. LLMs aren't "there" yet with respect to this sort of thing. Having an editor is totally infeasible; at that point you might as well have humans write the articles.
jameslk | No.45777975
> LLMs aren't "there" yet with respect to this sort of thing

Yes, nothing about this is "there" yet, which was my point.

> Having an editor is totally infeasible; at that point you might as well have humans write the articles.

Why?

jerf | No.45778439
For the same reason you don't hand-edit autogenerated files in your source code base. It's easy to have an LLM regenerate the page from scratch, but once someone tries to edit the output, you're even further into territory LLMs can't handle right now. I wouldn't trust one to follow even a single edit instruction reliably, at scale, on documents of that size; and if humans are making multiple edits while the LLM is folding in its own improvements... yeah, LLMs aren't even remotely ready for that at this point.
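The generated-file problem jerf describes can be sketched in a few lines. This is a minimal, hypothetical illustration (the function and names are made up, not anything from Grokipedia): any manual fix applied to a generator's output is silently lost the next time the generator runs, which is why edits need to live upstream of the generator rather than in its output.

```python
def generate_article(topic: str) -> str:
    """Stand-in for an LLM regenerating a page from scratch (hypothetical)."""
    return f"# {topic}\n\nAuto-generated summary of {topic}.\n"

# First generation of the page.
article = generate_article("Grokipedia")

# A human editor fixes the text directly in the output...
edited = article.replace("Auto-generated summary", "Carefully corrected summary")
assert "Carefully corrected" in edited

# ...but the next regeneration starts from the source, not from the edit,
# so the human correction is clobbered.
article = generate_article("Grokipedia")
assert "Carefully corrected" not in article
```

The same logic is why build systems mark generated files "do not edit": the durable place for a fix is the input to the generator, and merging concurrent human and machine edits into regenerated output is exactly the hard part.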
jameslk | No.45778476
That’s a good point. I think it’s a similar problem to why you wouldn’t let a model run wild in your codebase, though. If good solutions are found for handling AI models making code changes, it seems reasonable to expect they may be applicable here as well.