So then people will just learn the language of the LLM. For example, if a particular LLM always interprets "set my alarm for 8" as setting the alarm for 8am, people will learn to say just that if they want 8am, and to specify pm (or use a 24-hour clock) if they want 8pm.
I can see this having odd effects on natural language. Natural language users are forever in a state of negotiation with each other. If you say something to someone and they don't understand, they can ask for clarification (or, more likely, just look confused), but, equally, you can take that feedback and adjust your own language model. This happens all day, every day. If most people understand you but a few don't, it's on the few to adjust their models; but if more misunderstand than understand, it's on you to adjust yours.
With current LLMs it's one-way. Only you, the human, are malleable. Of course, in theory the LLM could continuously incorporate your input into its model, but as far as I know we're a long way from that being practical.
We'll have to see how it pans out, but I can see it ending up either in a weird feedback loop where people just capitulate and adopt the language of the LLM, or in people using human language with humans and a special LLM language with LLMs. Both options seem pretty bad.