The use-case (note: I'm not arguing this is a
good reason) is to allow the AI agent that reads the support tickets to fix them as well.
The problem, of course, is that, just as you say, you need a security boundary: the moment user-provided data gets inserted into the conversation with an LLM, you have to restrict the agent to acting with the same permissions you would be willing to give the entity that submitted that data in the first place, because we have no good way of preventing prompt injection.
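Concretely, that boundary looks something like the sketch below. This is only an illustration; Ticket, allowed_actions_for and run_agent_action are made-up names, not any particular framework's API:

```python
# Minimal sketch: scope the agent's actions to the ticket submitter's
# permissions, not the service account's. Ticket, allowed_actions_for
# and run_agent_action are hypothetical names for illustration only.
from dataclasses import dataclass


@dataclass
class Ticket:
    submitter_id: str
    body: str  # untrusted: may contain prompt-injection payloads


def allowed_actions_for(user_id: str) -> set[str]:
    # Whatever the submitter could do through the normal UI, and nothing more.
    return {"comment_on_own_ticket", "close_own_ticket"}


def run_agent_action(ticket: Ticket, requested_action: str) -> None:
    # The agent may have been steered by the ticket body, so every action
    # it requests is checked against the *submitter's* permissions.
    if requested_action not in allowed_actions_for(ticket.submitter_id):
        raise PermissionError(
            f"agent requested {requested_action!r}, which submitter "
            f"{ticket.submitter_id} is not permitted to do"
        )
    # ... perform the action here ...
```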
I think that is where the disconnect (still stupid) comes in:
They treated the support tickets as inert data coming from a trusted system (the database), instead of treating them as the user-submitted data they are.
Storing data without making it clear whether it is potentially still tainted, and then treating it as sanitised because you've disconnected the "obvious" unsafe source from the application that processes it next, is still a common security problem.
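One way to make the taint harder to lose is to store it alongside the data, defaulting to untrusted, so the next application has to opt in to trusting it. A rough sketch with made-up names:

```python
# Rough sketch: keep the taint with the data through storage so the next
# application can't mistake it for clean, system-generated text. All
# names here are illustrative, not from any real codebase.
from dataclasses import dataclass


@dataclass(frozen=True)
class StoredText:
    value: str
    source: str           # e.g. "end_user", "internal_tool"
    tainted: bool = True   # default to tainted; trust has to be opt-in


def load_ticket_body(row: dict) -> StoredText:
    # Reading the text back out of the database does not launder it:
    # the record keeps saying where it originally came from.
    return StoredText(value=row["body"], source=row["source"])


def prompt_context(record: StoredText) -> str:
    if record.tainted:
        # The caller must treat this as attacker-controlled input and
        # scope the agent's permissions to the original submitter.
        return f"[UNTRUSTED, source={record.source}]\n{record.value}"
    return record.value
```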