
421 points | sohkamyung | 2 comments
scarmig
If you dig into the actual report (I know, I know, how passé), you see how they get the numbers. Most of the errors are "sourcing issues": the AI assistant doesn't cite a claim, or it (shocking) cites Wikipedia instead of the BBC.

Other issues: the report doesn't even say which particular models it's querying [ETA: discovered they do list this in an appendix], aside from saying it's the consumer tier. And it leaves off Anthropic (in my experience, by far the best at this type of task), favoring Perplexity and (perplexingly) Copilot. The article also intermingles claims from the recent report and the one based on research conducted a year ago, leaving out critical context that... things have changed.

This article contains significant issues.

afavour
> or it (shocking) cites Wikipedia instead of the BBC.

No... the problem is that it cites Wikipedia articles that don't exist.

> ChatGPT linked to a non-existent Wikipedia article on the “European Union Enlargement Goals for 2040”. In fact, there is no official EU policy under that name. The response hallucinates a URL but also, indirectly, an EU goal and policy.

kenjackson
Actually, there was a Wikipedia article with this name, but it was deleted in June -- because it was AI-generated. Unfortunately, AI falls for this much like humans do.

https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletio...

Workaccount2
This is likely because of the knowledge cutoff.

I have seen a few cases before where "hallucinations" turned out to be things that did exist but no longer do.

1980phipsi
The fix for this is for the AI to double-check all links before providing them to the user. I frequently ask ChatGPT to double-check that references actually exist when it gives them to me. It should be built in!
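
A minimal sketch of such a check, in Python with the standard library (the function name and error handling here are illustrative, not any vendor's actual pipeline):

    import urllib.error
    import urllib.request

    # Verify-before-you-cite pass: keep only URLs that currently resolve.
    def filter_dead_links(urls, timeout=5):
        live = []
        for url in urls:
            try:
                # urlopen raises HTTPError (a URLError subclass) for 4xx/5xx
                # statuses, so reaching the body means the link resolved.
                with urllib.request.urlopen(url, timeout=timeout):
                    live.append(url)
            except (urllib.error.URLError, TimeoutError, ValueError):
                pass  # dead, unreachable, or malformed: drop the citation
        return live

In practice it would also need a realistic User-Agent and redirect handling, since many sites reject default script clients.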
janwl
I thought people here hated it when LLMs made HTTP requests?
zahlman
It's bad when they indiscriminately crawl for training, and it's not ideal (but understandable) to use the Internet to communicate with them (and to have online accounts associated with that, etc.) rather than running them locally.

It's not bad when they use the Internet at generation time to verify the output.

Dylan16807
Also, for the most part, this verification can use a HEAD request.
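
For example, a rough sketch of the same check with HEAD, again assuming Python's standard library (names are illustrative):

    import urllib.error
    import urllib.request

    # HEAD returns only the status line and headers, so the page body
    # is never downloaded.
    def link_exists(url, timeout=5):
        req = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(req, timeout=timeout):
                return True  # urlopen raises HTTPError for 4xx/5xx
        except (urllib.error.URLError, TimeoutError):
            return False

One caveat: some servers answer HEAD with 405 Method Not Allowed even for pages that exist, so a GET fallback is still worth having.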