I'm annoyed by LLM inference speed and latency. I want my disillusionment before dinner. I'm running some experiments with RAG-analogue approaches built on post-attention cache encoding, then thinking about how distributed caches could operate to reduce computational latency, and about how interesting it is that the key-value mental model starts to shade into social dynamics. There are so many fun ways to approach the topic.
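
To make the first piece concrete: if I read "post-attention cache encoding" as precomputing a document's key-value cache once and reusing it at query time instead of re-prefilling the raw text, a minimal sketch with Hugging Face transformers looks like the following. The model name, prompts, and wiring here are illustrative placeholders, not the actual experiment, and the exact cache plumbing varies a bit across transformers versions.

```python
# Sketch: encode a document into a KV cache offline, then reuse that cache
# for queries against it. Placeholder model and prompts, not the real setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

# Offline step: run the document through the model once and keep the KV cache.
doc = "The capital of Australia is Canberra."
doc_ids = tok(doc, return_tensors="pt").input_ids
with torch.no_grad():
    doc_out = model(doc_ids, use_cache=True)
doc_cache = doc_out.past_key_values  # the "post-attention" artifact we'd store/ship

# Online step: append the query and generate, reusing the cached document prefix
# so only the query tokens need a fresh forward pass.
# (Newer transformers versions may want this wrapped in a DynamicCache.)
query = " Q: What is the capital of Australia? A:"
query_ids = tok(query, return_tensors="pt").input_ids
full_ids = torch.cat([doc_ids, query_ids], dim=-1)
with torch.no_grad():
    gen = model.generate(
        full_ids,
        past_key_values=doc_cache,  # skip re-prefilling the document tokens
        max_new_tokens=8,
    )
print(tok.decode(gen[0][full_ids.shape[-1]:]))
```

The distributed version is the same idea with the caches living somewhere else and being fetched or routed per query, which is where the latency question, and the social-dynamics framing, comes in.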