
423 points sohkamyung | 1 comment
giantg2 ◴[] No.45671249[source]
"AI assistants misrepresent news content 45% of the time"

How does that compare to the number for reporters? I feel like half the time I read or hear a report on a subject I know well, the reporter has misrepresented something.

replies(1): >>45671864 #
latexr ◴[] No.45671864[source]
That’s whataboutism and doesn’t address the criticism or the problem. If a reporter misrepresents a subject, intentionally or accidentally, it doesn’t make it OK for a tool to then misrepresent it further, mangling both what was correct and what was incorrect.

https://en.wikipedia.org/wiki/Whataboutism

replies(2): >>45672272 #>>45672724 #
1. giantg2 ◴[] No.45672272[source]
It's not whataboutism, because I'm not using it to undermine the argument. It's a legitimate question for gauging the potential impact of an AI misrepresenting news. Assessing impact is part of determining corrective action and prioritization.