
577 points simonw | 10 comments
1. neutronicus
If I understand correctly, the author is managing to run this model on a laptop with 64GB of RAM?

So a home workstation with 64GB+ of RAM could get similar results?

2. simonw
Only if that RAM is available to a GPU, or you're willing to tolerate extremely slow responses.

The neat thing about Apple Silicon is that the system RAM is available to the GPU. On most other systems you would need ~48GB of VRAM.

3. lynndotpy
The laptop has "unified RAM", so that's like 64GB of VRAM.
4. simlevesque
Not so sure. The MBP uses unified memory: the RAM is shared between the CPU and GPU.

Your 64GB workstation doesn't share its RAM with your GPU.

5. NitpickLawyer
> So a home workstation with 64GB+ of RAM could get similar results?

Similar in quality, but CPU generation will be slower than what Macs can do.

What you can do with MoEs (GLMs and Qwens) is run some of the experts (usually the shared ones) on a GPU (even a 12GB/16GB card will do) and the rest from RAM on the CPU. That speeds things up considerably (especially prompt processing). If you're interested in this, look up llama.cpp and especially ik_llama, which is a fork dedicated to this kind of selective offloading of experts.
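
As a rough sketch of what that looks like with llama.cpp's tensor-override option (the model filename here is a placeholder, and flag spellings vary a bit between llama.cpp and ik_llama builds, so check --help on yours):

    # offload all layers to the GPU, then force the MoE expert tensors back into CPU RAM
    ./llama-server -m some-moe-model-q4_k_m.gguf \
        -ngl 99 \
        --override-tensor "exps=CPU" \
        -c 8192

The attention and shared weights end up in VRAM, while the big per-expert matrices stay in system RAM.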

6. xrd
Aren't there non-macOS laptops which also support sharing VRAM and regular RAM, i.e. via an iGPU?

https://www.reddit.com/r/GamingLaptops/comments/1akj5aw/what...

I personally want to run Linux and feel like I'll get a better price/GB that way. But it is confusing to know how local models will actually work on those machines and what the drawbacks of an iGPU are.

8. mft_
iGPUs are typically weak, and/or aren't capable of running the LLM, so the CPU is used instead. You can run things this way, but it's not fast, and it gets slower as models go up in size.

If you want things to run quickly, then aside from Macs there's the 2025 ASUS ROG Flow Z13, which (afaik) is the only laptop with AMD's new Ryzen AI Max+ 395 processor. It's powerful and has up to 128GB of RAM that can be shared with the GPU, but they're very rare (and Mac-expensive) at the moment.

The other variable for running LLMs quickly is memory bandwidth; the AI Max+ 395 has 256GB/s, which is similar to the M4 Pro, while the M4 Max chips are considerably higher. Apple really landed on their feet with this one.

9. 0x457
You can run it, it will just run on the CPU and be pretty slow. Macs, as everyone in this thread has said, use unified memory, so it's 64GB shared between the CPU and GPU, while for you it's just 64GB for the CPU.
10. sagarm
LLM evaluation on GPU and CPU is memory-bandwidth constrained. The highest-end Apple machines are good for this because they have ~500GB/s of memory bandwidth and up to ~128GB of it, not just because they can share that memory with the GPU (which any iGPU does). Most consumer machines are limited to 2x DDR5 channels (~50GB/s).
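
For a rough sense of what that limit means, here's an illustrative back-of-envelope (assuming ~30GB of weights have to be streamed per generated token; real numbers depend on the model and quantization):

    upper bound ≈ memory bandwidth / bytes read per token
      ~50 GB/s  / 30 GB ≈ 1-2 tok/s    (dual-channel DDR5)
      ~500 GB/s / 30 GB ≈ 16-17 tok/s  (high-end Apple Silicon)

MoE models help here because only the active experts are read for each token, which shrinks the denominator.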