This article should be adjusted to say that poor prompting misrepresents news content 45% of the time.
Now, who is responsible for poor prompting?
Maybe the LLM makers will just tighten up this part of their models and assistants, and suddenly the problem will look solved.