
423 points | sohkamyung | 1 comment
falcor84 No.45669518
> 45% of all AI answers had at least one significant issue.

> 31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.

> 20% contained major accuracy issues, including hallucinated details and outdated information.

I'm generally against whataboutism, but here I think we absolutely have to compare it to human-written news reports. Famously, Michael Crichton introduced the "Gell-Mann amnesia effect" [0], saying:

> Briefly stated, the Gell-Mann Amnesia effect works as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.

This has absolutely been my experience. I couldn't find proper figures, but I would put good money on significantly more than 45% of human-written news articles having "at least one significant issue".

[0] https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect

replies(6): >>45669594 #>>45669605 #>>45669612 #>>45669644 #>>45669939 #>>45670193 #
1. wat10000 No.45670193
That's not comparable. Reading news reports and summarizing them is about a thousand times easier than writing those reports in the first place. If you want to see how humans fare at this task, have some people answer questions about the news and then compare their answers to the original reporting. I'm not sure the average human would fare too well at this either, but it's a completely different question from how accurate the original news itself is.