
439 points david927 | 1 comment

What are you working on? Any new ideas which you're thinking about?
1. Honga No.44426931
I'm annoyed by LLM inference speed and latency. I want my disillusionment before dinner. I'm running experiments with RAG-analogue approaches using post-attention cache encoding, and thinking about how distributed caches could operate to reduce computational latency. It's interesting how the key-value relationship, as a mental narrative, shifts into social dynamics. There are so many fun ways to approach the topic.
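
To make the distributed-cache idea concrete: a minimal sketch, not the commenter's actual setup, of a prefix-keyed KV cache where the KV state of a token prefix is content-addressed so repeated prefixes skip recomputation. All names here (PrefixKVCache, compute_kv, prefill) are hypothetical, and compute_kv is a placeholder for a real attention forward pass; in a distributed deployment the in-process dict would be a shared store such as a Redis-style tier.

    # Sketch: prefix-keyed KV cache for LLM prefill (hypothetical names).
    import hashlib

    class PrefixKVCache:
        """Maps a token-prefix hash to its precomputed KV state.

        In a distributed setting this dict would be a shared store;
        here it is in-process for illustration.
        """

        def __init__(self):
            self._store: dict[str, object] = {}

        @staticmethod
        def _key(tokens: tuple[int, ...]) -> str:
            # Content-address the prefix so identical prefixes hit the
            # same entry regardless of which request produced them.
            return hashlib.sha256(repr(tokens).encode()).hexdigest()

        def longest_prefix(self, tokens: tuple[int, ...]):
            # Walk from the full sequence down; return the longest
            # prefix whose KV state is already cached.
            for cut in range(len(tokens), 0, -1):
                state = self._store.get(self._key(tokens[:cut]))
                if state is not None:
                    return cut, state
            return 0, None

        def put(self, tokens, kv_state):
            self._store[self._key(tuple(tokens))] = kv_state

    def compute_kv(tokens, prior_state=None):
        # Placeholder for the real forward pass; returns an opaque
        # KV state covering all tokens seen so far.
        return {"covers": len(tokens), "resumed": prior_state is not None}

    def prefill(cache, tokens):
        tokens = tuple(tokens)
        hit_len, state = cache.longest_prefix(tokens)
        # Only the uncached suffix needs a forward pass.
        if hit_len < len(tokens):
            state = compute_kv(tokens, prior_state=state)
        cache.put(tokens, state)
        return hit_len, state

    if __name__ == "__main__":
        cache = PrefixKVCache()
        doc = [101, 7, 7, 42, 9]           # shared context tokens
        print(prefill(cache, doc))          # cold: hit_len == 0
        print(prefill(cache, doc + [13]))   # warm: reuses the 5-token prefix

Note that hashing every candidate prefix length is quadratic in the worst case; a radix tree over token IDs, as some serving systems use for prefix caching, would scale better, but the hash-per-prefix version keeps the lookup logic obvious.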