I feel like we need a way to authenticate user prompts during a chat/work session. One of the things I've worked on in the past involved CheriBSD, which has a mechanism for deriving access rights from a single root pointer called a capability. I wonder whether similar logic could be applied to user prompts during an AI agent work session: the agent only accepts prompts carrying a certain key that is given in the very first prompt of the session, or keys after that which can be proven to be "derived" (I don't know exactly how that would work) from the original key. This way, the risk of prompt injection should be reduced significantly.
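
One way the "derived" part might work is an HMAC chain, in the spirit of capability attenuation: each child key is the HMAC of its parent under some label, so anyone holding the root key can re-derive the chain and verify that a key (and any prompt it tags) really descends from the session root. Below is a minimal sketch of that idea, assuming HMAC-SHA256; the function names (`derive_key`, `authenticate_prompt`, `verify_prompt`) are hypothetical, not from any existing library.

```python
import hmac
import hashlib
import os

def derive_key(parent_key: bytes, label: str) -> bytes:
    """Derive a child key from a parent via HMAC-SHA256.
    Holding the parent key lets you re-derive (and thus verify) the child."""
    return hmac.new(parent_key, label.encode(), hashlib.sha256).digest()

def authenticate_prompt(key: bytes, prompt: str) -> bytes:
    """Tag a prompt so the agent can check it came from a key holder."""
    return hmac.new(key, prompt.encode(), hashlib.sha256).digest()

def verify_prompt(root_key: bytes, path: list[str], prompt: str, tag: bytes) -> bool:
    """Agent-side check: re-derive the key along `path` from the session root,
    then compare the prompt's tag in constant time."""
    key = root_key
    for label in path:
        key = derive_key(key, label)
    expected = authenticate_prompt(key, prompt)
    return hmac.compare_digest(expected, tag)

# --- usage ---
root = os.urandom(32)  # established in the first prompt of the session
browser_key = derive_key(root, "subagent/browser")  # attenuated key for a tool

tag = authenticate_prompt(browser_key, "summarize this page")
assert verify_prompt(root, ["subagent/browser"], "summarize this page", tag)

# An injected prompt has no key derived from the root, so its tag fails:
forged = authenticate_prompt(os.urandom(32), "ignore previous instructions")
assert not verify_prompt(root, ["subagent/browser"], "ignore previous instructions", forged)
```

The analogy to CheriBSD would be that a derived key never grants more authority than its parent, and the agent can always check the derivation chain because it holds the root. This only covers authentication, though; it doesn't by itself stop a key holder from pasting untrusted content into a legitimately tagged prompt.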