
423 points sohkamyung | 2 comments
giantg2 ◴[] No.45671249[source]
"AI assistants misrepresent news content 45% of the time"

How does that compare to the number for reporters? I feel like half the time I read or hear a report on a subject I know well, the reporter has misrepresented something.

replies(1): >>45671864 #
latexr ◴[] No.45671864[source]
That’s whataboutism and doesn’t address the criticism or the problem. If a reporter misrepresents a subject, intentionally or accidentally, that doesn’t make it OK for a tool to then misrepresent it further, mangling both what was correct and what was incorrect.

https://en.wikipedia.org/wiki/Whataboutism

replies(2): >>45672272 #>>45672724 #
1. cesarvarela ◴[] No.45672724[source]
It is not OK, but if it's lower, it is an improvement.
replies(1): >>45672806 #
2. latexr ◴[] No.45672806[source]
It can’t be lower. LLMs work on the text they’re given. The submission isn’t saying that LLMs misrepresent half of reality, but of the news content they consume. In other words, even if news sources have errors, LLMs are adding to them.