
1311 points | msoad | 1 comment
w1nk ◴[] No.35394065[source]
Does anyone know how/why this change decreases memory consumption (and isn't a bug in the inference code)?

From my understanding of the issue, mmap'ing the file is showing that inference is only accessing a fraction of the weight data.

Doesn't the forward pass necessitate accessing all the weights and not a fraction of them?

replies(4): >>35394751 #>>35396440 #>>35396507 #>>35398499 #
1. losteric ◴[] No.35396507[source]
yeah, I believe some readers are misinterpreting the report. mmap'd memory is managed by the OS page cache, so it doesn't show up as "regular" process memory: pages are faulted in lazily on first access and the kernel decides what stays resident. If there's enough RAM, the OS will keep the whole file cached; under memory pressure it can simply drop file-backed pages (they can always be re-read from the file on disk) instead of swapping, which is what it has to do for explicitly allocated memory (malloc).
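
To make the lazy-loading behavior concrete, here's a minimal Linux/C++ sketch (not llama.cpp's actual loader): it mmaps a file, touches only a fraction of its pages, and then asks the kernel via mincore() how many pages are actually resident. The "model.bin" path and the 1-in-100 touch ratio are made up for illustration.

    #include <cstdio>
    #include <vector>
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main() {
        // "model.bin" is a stand-in for a weights file; the path is illustrative.
        int fd = open("model.bin", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st{};
        if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }
        size_t len = (size_t)st.st_size;

        // Map the whole file read-only. No data is read from disk yet; the kernel
        // just sets up page-table entries that fault on first access.
        auto *p = static_cast<unsigned char *>(
            mmap(nullptr, len, PROT_READ, MAP_PRIVATE, fd, 0));
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        long page = sysconf(_SC_PAGESIZE);

        // Touch roughly 1 in every 100 pages. Only these pages get faulted in
        // and counted toward the process's resident set.
        volatile unsigned char sink = 0;
        for (size_t off = 0; off < len; off += (size_t)page * 100) sink += p[off];

        // Ask the kernel which pages of the mapping are currently resident.
        size_t npages = (len + (size_t)page - 1) / (size_t)page;
        std::vector<unsigned char> vec(npages);
        if (mincore(p, len, vec.data()) == 0) {
            size_t resident = 0;
            for (unsigned char v : vec) resident += (v & 1);
            printf("mapped %zu pages, resident %zu pages\n", npages, resident);
        }

        munmap(p, len);
        close(fd);
        return 0;
    }

Running something like this against a multi-gigabyte file shows a resident count far below the mapped count, which is exactly why tools that report "memory used" can look surprisingly small for an mmap-based loader.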

Sounds like the big win is load time from these optimizations. Also, maybe llama.cpp can now run on low-memory systems by letting the OS page the weights in and out via mmap? ... at the end of the day, 30B quantized is still 19GB...
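
If it does lean on the OS for paging, the mechanism would look roughly like this (a hypothetical continuation of the sketch above, reusing its 'p' and 'len'; not the project's actual code):

    // With mmap the kernel owns paging, so a machine with less RAM than the
    // model can still run inference, just slower on evicted pages.
    madvise(p, len, MADV_RANDOM);   // weight access is scattered per token;
                                    // hint the kernel not to read ahead aggressively
    // ... run inference: pages dropped under memory pressure are simply
    // re-faulted from the weight file on their next access ...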