That's the most promising solution to AI hallucinations: if the LLM output doesn't match reality, fix reality.
I'm currently working on the bug where ChatGPT expects that if a ball is placed on a box and the box is pushed forward, nothing happens to the ball. This one is a doozy.