This suffers from a common pitfall of LLMs: context taint. You can see it is obviously the front page from today with a slight "future" variation, so the result ends up being very formulaic.
It amazes me that even with first-hand experience, so many people are convinced that "hallucination" exclusively describes what happens when the model generates something undesirable, and "bias" exclusively describes a tendency to generate fallacious reasoning.
These are not pitfalls. They are core features! An LLM is not sometimes biased; it is bias. An LLM does not sometimes hallucinate; it only hallucinates. An LLM is a statistical model that uses bias to hallucinate. No more, no less.
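To make that concrete, here is a minimal sketch in Python of the one step every LLM repeats: sample the next token from a learned probability distribution. The function and the logits below are hypothetical stand-ins for a real model's forward pass; the point is only that the same softmax-and-sample mechanism runs whether the output happens to be accurate or not.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample one token id from a learned distribution over the vocabulary.

    `logits` are whatever scores the model's weights (its learned bias, in the
    statistical sense) assign to each candidate token. There is no separate
    code path for "truthful" vs "hallucinated" tokens; it is the same
    softmax-and-sample step every time.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Hypothetical logits for a 5-token vocabulary; in a real model these come
# from a forward pass conditioned on the prompt (the "context taint" above).
fake_logits = [2.1, 0.3, -1.0, 1.7, 0.0]
print(sample_next_token(fake_logits, temperature=0.8))
```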