I suspect a lot of techies operate with a subconscious good-faith assumption: "That can't be how X works, nobody would ever built it that way, that would be insecure and naive and error-prone, surely those bajillions of dollars went into a much better architecture."
Alas, when it comes to day's the AI craze, the answer is typically: "Nope, the situation really is that dumb."
__________
P.S.: I would also like to emphasize that even if we somehow color-coded or delineated all text based on origin, that's nowhere close to securing the system. An attacker doesn't need to type $EVIL themselves, they just need to trick the generator into mentioning $EVIL.
Your best case scenario is reducing risk by some % but you could also make it less reliable or even open up new attack vectors.
Security issues like these need deterministic solutions, and that's exceedingly difficult (if not impossible) with LLMs.
Even just the first step on the list is a doozy: The LLM has no authorial ego to separate itself from the human user, everything is just The Document. Any entities we perceive are human cognitive illusions, the same way that the "people" we "see" inside a dice-rolled mad-libs story don't really exist.
That's not even beginning to get into things like "I am not You" or "I have goals, You have goals" or "goals can conflict" or "I'm just quoting what You said, saying these words doesn't mean I believe them", etc.
This is not SQL.
There is no generally safe way of escaping LLM input, all you can do is pray, cajole, threaten or hope.