
Basic Facts about GPUs (damek.github.io)
338 points by ibobev | 3 comments
b0a04gl | No.44366418
been running llama.cpp and vllm on the same 4070, trying to batch more prompts for serving. llama.cpp was lagging badly once I hit batch 8 or so, even though GPU usage looked fine. vllm handled it way better.

later found that vllm uses a paged kv cache whose layout matches how the GPU wants to read: fully coalesced, no strided jumps. llama.cpp was using a flat layout that's fine for a single prompt but breaks L2 access patterns when batching.
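
a rough sketch of the paged idea in plain PyTorch, just to show the shape of it — block size, names, and the helper are all made up for illustration, not vllm's actual code:

    import torch

    BLOCK = 16                        # tokens per physical block (hypothetical size)
    n_heads, head_dim, n_blocks = 8, 64, 256

    # one shared physical pool instead of one contiguous tensor per sequence
    k_pool = torch.zeros(n_blocks, BLOCK, n_heads, head_dim)

    def append_token(block_table, pos, k_vec, free_blocks):
        # block_table maps one sequence's logical positions to physical blocks,
        # so batched sequences can grow without reallocating anything big
        if pos % BLOCK == 0:
            block_table.append(free_blocks.pop())
        k_pool[block_table[pos // BLOCK], pos % BLOCK] = k_vec

    free = list(range(n_blocks))
    table = []                        # per-sequence logical -> physical mapping
    for t in range(40):
        append_token(table, t, torch.randn(n_heads, head_dim), free)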

reshaped the kv tensors in llama.cpp to interleave: made it [head, seq, dim] instead of [seq, head, dim], closer to how vllm feeds data into its fused attention kernel. 2x speedup right there for the same ops.
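
in PyTorch terms the layout change above is just a permute plus a copy (sizes arbitrary, this is only to show the stride change, not the actual llama.cpp/ggml code):

    import torch

    seq, heads, dim = 2048, 32, 128
    k_flat = torch.randn(seq, heads, dim)                 # [seq, head, dim] single-prompt layout

    k_interleaved = k_flat.permute(1, 0, 2).contiguous()  # [head, seq, dim]
    # each head's keys are now one contiguous run in memory,
    # so walking the sequence inside a head is a linear read
    assert k_interleaved.stride() == (seq * dim, dim, 1)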

the GPU was never the bottleneck. it was the memory layout not aligning with the SMs' expected access stride. vllm just defaults to layouts that make better use of shared memory and cut down global reads. that's the real reason it scales better per batch.
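
if you want to poke at the effect without touching llama.cpp, a crude probe like this (my own toy; numbers depend heavily on hardware and won't reproduce the 2x) compares a per-head reduction over the contiguous [head, seq, dim] layout against the strided view of [seq, head, dim]:

    import time
    import torch

    dev = "cuda" if torch.cuda.is_available() else "cpu"
    seq, heads, dim = 4096, 32, 128

    k_seq_major = torch.randn(seq, heads, dim, device=dev)
    k_strided = k_seq_major.permute(1, 0, 2)        # [head, seq, dim] view, strided in memory
    k_contig = k_strided.contiguous()               # [head, seq, dim], truly contiguous

    def bench(x, iters=50):
        # sum over the sequence axis per head: a stand-in for the
        # memory traffic of walking the kv cache in attention
        if dev == "cuda":
            torch.cuda.synchronize()
        t0 = time.perf_counter()
        for _ in range(iters):
            x.sum(dim=1)
        if dev == "cuda":
            torch.cuda.synchronize()
        return time.perf_counter() - t0

    print("strided   :", bench(k_strided))
    print("contiguous:", bench(k_contig))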

this took a good 2+ days, and I had to dig under the nice-looking GPU graphs to find the real bottlenecks. it was wildly trial and error, tbf.

> anybody got an idea how to do this kind of experiment in hot-reload mode without so much hassle?

replies(5): >>44367323 #>>44367389 #>>44367889 #>>44367899 #>>44370340 #
1. tough | No.44367889
did you see nano-vllm [1] yesterday, from a DeepSeek employee? ~1200 LOC and faster than vanilla vllm.

1. https://github.com/GeeeekExplorer/nano-vllm

replies(1): >>44368195 #
2. Gracana | No.44368195
Is it faster for large models, or are the optimizations more noticeable with small models? Seeing that the benchmark uses a 0.6B model made me wonder about that.
replies(1): >>44370093 #
3. tough | No.44370093
I have not tested it, but it's from a DeepSeek employee; I don't know whether it's used in prod there or not!