What problem do they want to solve?
From a security perspective, the real problem seems to me that LLMs cannot distinguish between instructions and data; I don't see how this proposal even attempts to address this, but then I haven't really understood their problem description (if there was one).