
269 points by dipampaul17 | 2 comments

I discovered that in LLM inference, keys and values in the KV cache have very different quantization sensitivities. Keys need higher precision than values to maintain quality.

I patched llama.cpp to enable different bit-widths for keys vs. values on Apple Silicon. The results are surprising:

- K8V4 (8-bit keys, 4-bit values): 59% memory reduction with only a 0.86% perplexity increase
- K4V8 (4-bit keys, 8-bit values): 59% memory reduction but a 6.06% perplexity increase
- Both configurations spend the same number of bits, yet K8V4 degrades quality roughly 7× less (rough math below)
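
For intuition on where the 59% figure comes from, here is a back-of-the-envelope sketch. It assumes q8_0/q4_0-style block layouts (32 elements plus one fp16 scale per block); the exact formats KVSplit uses may differ:

    #include <cstdio>

    int main() {
        // Bits per cached element, assuming q8_0 / q4_0 style blocks:
        // 32 elements per block plus one fp16 scale.
        const double fp16 = 16.0;
        const double q8   = (32 * 8 + 16) / 32.0;   // 8.5 bits/element
        const double q4   = (32 * 4 + 16) / 32.0;   // 4.5 bits/element

        // K and V caches hold the same number of elements, so the average
        // per-element cost is the mean of the two precisions.
        const double k8v4 = (q8 + q4) / 2.0;        // 6.5 bits/element
        const double k4v8 = (q4 + q8) / 2.0;        // identical footprint

        std::printf("K8V4: %.1f bits/elem, %.1f%% smaller than fp16\n",
                    k8v4, 100.0 * (1.0 - k8v4 / fp16));
        std::printf("K4V8: %.1f bits/elem, %.1f%% smaller than fp16\n",
                    k4v8, 100.0 * (1.0 - k4v8 / fp16));
        return 0;
    }

Both layouts land at ~6.5 bits per element (a ~59% cut vs. 16-bit K and V), so the quality gap comes entirely from where the precision is spent, not how much is spent.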

This means you can run LLMs with 2-3× longer context on the same Mac. KV cache memory scales linearly with sequence length, so the absolute savings grow as the context does.
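
A rough sizing sketch of that scaling follows. The model geometry (22 layers, 4 KV heads, head dim 64, roughly TinyLlama-1.1B) and the ~6.5 bits/element average for K8V4 are assumptions for illustration, not numbers from the repo:

    #include <cstdio>

    // bytes = 2 (K and V) * layers * kv_heads * head_dim * seq_len * avg_bits / 8
    double kv_cache_mib(int layers, int kv_heads, int head_dim,
                        int seq_len, double avg_bits_per_elem) {
        double bytes = 2.0 * layers * kv_heads * head_dim
                     * double(seq_len) * avg_bits_per_elem / 8.0;
        return bytes / (1024.0 * 1024.0);
    }

    int main() {
        // Assumed TinyLlama-1.1B-like geometry -- adjust for your model.
        const int layers = 22, kv_heads = 4, head_dim = 64, ctx = 8192;

        std::printf("fp16 KV cache @ %d tokens: %.0f MiB\n",
                    ctx, kv_cache_mib(layers, kv_heads, head_dim, ctx, 16.0));
        std::printf("K8V4 KV cache @ %d tokens: %.0f MiB\n",
                    ctx, kv_cache_mib(layers, kv_heads, head_dim, ctx, 6.5));
        std::printf("Context that fits in the fp16 budget: ~%.1fx\n", 16.0 / 6.5);
        return 0;
    }

At ~16 vs. ~6.5 bits per element, the same memory budget holds roughly 2.5× more tokens, which is where the 2-3× figure comes from.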

Implementation was straightforward:

1. Added --kvq-key and --kvq-val flags to llama.cpp
2. Applied the existing quantization logic separately to the K and V tensors
3. Validated with perplexity metrics across context lengths (see the sketch after this list)
4. Used Metal for acceleration (with the -mlong-calls flag to avoid vectorization issues)
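
On step 3: the quality numbers above are relative changes in perplexity against the fp16 cache. A minimal sketch of that metric, not the actual harness -- the per-token log-probabilities here are placeholders, not measured values:

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Perplexity = exp of the mean negative log-likelihood over the tokens.
    double perplexity(const std::vector<double>& logprobs) {
        double sum = 0.0;
        for (double lp : logprobs) sum += lp;
        return std::exp(-sum / double(logprobs.size()));
    }

    int main() {
        // Placeholder per-token log-probs; a real run collects these from the
        // model over a held-out text at each context length.
        std::vector<double> fp16_lp = {-2.10, -1.95, -2.40, -2.05};
        std::vector<double> k8v4_lp = {-2.12, -1.96, -2.42, -2.07};

        const double base  = perplexity(fp16_lp);
        const double quant = perplexity(k8v4_lp);
        std::printf("relative perplexity change: %.2f%%\n",
                    100.0 * (quant - base) / base);
        return 0;
    }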

Benchmarked on an M4 MacBook Pro running TinyLlama with 8K context windows. Compatible with Metal/MPS and optimized for Apple Silicon.

GitHub: https://github.com/dipampaul17/KVSplit

1. zmmmmm
Amazing!

Curious, what happens to performance? I assume you still pay the same performance price for longer context, even if you can now fit it in memory.

2. fennecbutt
I think this is true; I've found I get roughly the same iteration speed for prompt processing whether the cache is fp16, q8, or q4.

It doesn't make sense to me, though. I haven't looked into how it works inside, but I would've expected it to pack the quantized vectors and then do 4-8-bit SIMD on all of them at once; it really seems like it's not packing them.
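
To make that distinction concrete, here is a generic sketch of the two paths -- dequantize to float before the dot product vs. integer accumulation with the scales applied once at the end. Illustrative only, with made-up values; not a claim about what llama.cpp's Metal kernels actually do:

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // (a) Dequantize each int8 key value to float, then do a float dot product.
    float dot_dequant(const std::vector<int8_t>& q, const std::vector<float>& x,
                      float scale) {
        float acc = 0.0f;
        for (std::size_t i = 0; i < q.size(); ++i) acc += (float(q[i]) * scale) * x[i];
        return acc;
    }

    // (b) Keep both operands quantized and accumulate in int32 ("packed" style),
    //     applying both scales once at the end -- the shape integer SIMD likes.
    float dot_packed(const std::vector<int8_t>& q, const std::vector<int8_t>& xq,
                     float scale_q, float scale_x) {
        int32_t acc = 0;
        for (std::size_t i = 0; i < q.size(); ++i)
            acc += int32_t(q[i]) * int32_t(xq[i]);
        return float(acc) * scale_q * scale_x;
    }

    int main() {
        const std::vector<int8_t> k  = {12, -7, 33, 5};          // quantized keys, scale 0.01
        const std::vector<float>  x  = {0.5f, -1.0f, 0.25f, 2.0f};
        const std::vector<int8_t> xq = {16, -32, 8, 64};          // x quantized with scale 1/32

        std::printf("dequantize path: %f\n", dot_dequant(k, x, 0.01f));
        std::printf("packed path:     %f\n", dot_packed(k, xq, 0.01f, 1.0f / 32.0f));
        return 0;
    }

Both paths produce the same result here; the difference is purely in whether the hardware gets to chew on packed integers or on floats it had to reconstruct first.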