
Tim Bray on Grokipedia

(www.tbray.org)
175 points | by Bogdanp
tptacek
Why give it oxygen?
meowface
To play devil's advocate: contrary to popular conception, Grok has historically been one of the biggest debunkers of right-wing misinformation and conspiracy theories on Twitter. Elon keeps trying to tweak its system prompt to make it less effective at that, but Grokipedia still seemed worth an initial look out of curiosity. It took me 10 seconds to realize it was ideologically motivated garbage and significantly more right-biased than Wikipedia is left-biased.

(Unfortunately, Reply-Grok may now have been successfully, if partially, lobotomized for the long term. At the time of writing, if you ask grok.com about the 2020 election, it says Biden won and that Trump's fraud claims are unsubstantiated and without merit. If you @grok in a tweet, it now says Trump's fraud claims have significant merit, when previously it did not. Over the past few days I've also seen it extend way too much charity to right-wing framings in other instances.)

tptacek
Wikipedia is probably in the running for one of the greatest contributions to public knowledge of the past 100 years, and that's a consequence of how it functions, warts and all. I don't care how good Grok is or isn't. I'm a fan of frontier model LLMs. They don't meaningfully replace Wikipedia.
onetimeusename
What percent of edits on Wikipedia do you think are made by LLMs at present? There is a guide for detecting them: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing. The way Wikipedia functions, LLMs can make edits. They can be detected, but unless you are saying LLM edits are useless, I don't know what point you are making about an LLM contribution versus a human one. That LLMs aren't good enough to make meaningful contributions yet? That Grok specifically is the problem?