
My Impressions of the MacBook Pro M4

(michael.stapelberg.ch)
241 points | by secure
__mharrison__ ◴[] No.45775330[source]
Incredible hardware. Love that I can also run local LLMs on mine. https://github.com/Aider-AI/aider/issues/4526
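
For anyone curious what that looks like in practice, here is a minimal sketch using llama-cpp-python with its Metal backend; the model filename is just a placeholder for whatever quantized GGUF file you have downloaded locally:

    # Minimal sketch: run a small local model on Apple Silicon via
    # llama-cpp-python (Metal backend). The GGUF path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/qwen2.5-7b-instruct-q4_k_m.gguf",  # placeholder path
        n_gpu_layers=-1,  # offload all layers to the GPU (Metal on M-series)
        n_ctx=4096,       # context window
    )

    out = llm("Summarize why unified memory helps local inference.", max_tokens=64)
    print(out["choices"][0]["text"])
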
replies(3): >>45775520 #>>45775670 #>>45775821 #
ericmcer ◴[] No.45775821[source]
Can't you run small LLMs on like... a MacBook Air M1? Some models are under 1B parameters; they will be almost useless, but I imagine you could run them on anything from the last 10 years.

But yeah, if you want to run 600B+ parameter models, you're going to need an insane setup to run them locally.
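
Rough back-of-the-envelope math shows why (a sketch that only counts the weights and ignores KV cache and activation memory):

    # Rough weight-memory estimate: parameters * bits per weight / 8.
    # Real inference needs extra headroom for KV cache and activations.
    def approx_weight_memory_gb(params_billions, bits_per_weight):
        return params_billions * 1e9 * bits_per_weight / 8 / 1e9

    print(approx_weight_memory_gb(0.6, 4))   # ~0.3 GB: a <1B model fits almost anywhere
    print(approx_weight_memory_gb(600, 4))   # ~300 GB even at 4-bit quantization
    print(approx_weight_memory_gb(600, 16))  # ~1200 GB at fp16
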

replies(2): >>45777812 #>>45779635 #
zero_bias ◴[] No.45779635{3}[source]
I run Qwen models on an MBA M4 (16 GB) and an MBP M2 Max (32 GB). The MBA can handle models in line with its unified memory capacity (with external cooling), e.g. qwen3 embedding 8B (not 1B!), but inference is 4x-6x slower than on the MBP. I suspect the weaker SoC.

Anyway, Apple's M-series SoCs have a huge advantage thanks to unified memory: VRAM size == RAM size, so if you buy an M chip with 128+ GB of memory, you're pretty much able to run SOTA models locally, and the price is significantly lower than dedicated AI GPU cards.
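
A very rough fit check against unified memory (a sketch; macOS and other apps take a share of RAM, and inference needs extra headroom for the KV cache, so the usable fraction is well below 100%):

    # Rough check: do the quantized weights fit in the usable share of unified memory?
    def fits_in_unified_memory(params_billions, bits_per_weight, ram_gb, usable_fraction=0.7):
        weights_gb = params_billions * 1e9 * bits_per_weight / 8 / 1e9
        return weights_gb <= ram_gb * usable_fraction

    print(fits_in_unified_memory(8, 4, ram_gb=16))     # True: an 8B 4-bit model on a 16 GB MBA
    print(fits_in_unified_memory(32, 4, ram_gb=32))    # True: a 32B 4-bit model fits (tightly) on 32 GB
    print(fits_in_unified_memory(120, 4, ram_gb=128))  # True: large models on a 128 GB machine
    print(fits_in_unified_memory(600, 4, ram_gb=128))  # False: 600B+ still won't fit
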