
423 points by sohkamyung | 1 comment
falcor84 No.45669518
> 45% of all AI answers had at least one significant issue.

> 31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.

> 20% contained major accuracy issues, including hallucinated details and outdated information.

I'm generally against whataboutism, but here I think we absolutely have to compare it to human-written news reports. Famously, Michael Crichton introduced the "Gell-Mann amnesia effect" [0], saying:

> Briefly stated, the Gell-Mann Amnesia effect works as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.

This has absolutely been my experience. I couldn't find proper figures, but I would put good money on significantly more than 45% of human-written news articles having "at least one significant issue".

[0] https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect

intended No.45669939
Yes, I absolutely see the case for the faster, cheaper, more efficient solution for producing random content.

Why stop at what humans can do? And why be fettered by any expectation of accuracy, or even the feasibility of retractions?

Truly, efficiency unbound.