> intelligent beings can communicate in the context of a lot of prior knowledge
This is key. It works because of prior work. People have shared context because they develop it over time: as we are raised, shared context is passed on to the new generation, and it grows.
LLMs consume the context recorded in their training data, but they don't give it back. They diminish it, because people no longer need to learn the shared context when using these tools. This appears to work in some use cases, but it will degrade our collective shared context over time: the tools consume past shared context while atrophying our ability to maintain and extend it. Shared context reproduces and grows only when people learn it. If a tool simply takes it and precludes people from learning it, there is a delayed effect: over time there will be less shared context, and by the time the tool's performance degrades, our ability to maintain and extend the shared context will have degraded as well. We might reach an irrecoverable state and spiral.