"AI assistants misrepresent news content 45% of the time"
How does that compare to the number for reporters? I feel like half the time I read or hear a report on a subject I know well, the reporter has misrepresented something.
That’s whataboutism and doesn’t address the criticism or the problem. If a reporter misrepresents a subject, intentionally or accidentally, it doesn’t make it OK for a tool to then misrepresent it further, mangling both what was correct and what was incorrect.
It's not whataboutism because I'm not using it to undermine the argument. It's a legitimate question to gauge the potential impact of an AI misrepresenting news. Assessing impact is part of determining corrective action and prioritization.
It can’t be lower. LLMs work on the text they’re given. The submission isn’t saying that LLMs misrepresent half of reality, but 45% of the news content they consume. In other words, even if news sources already contain errors, LLMs are adding to them.