One thing I would consider is thermal throttling on a MacBook Pro. Would sustained LLM use run into it?
No idea specifically where everyone is pulling their performance data from, or for what task(s).
Here is a video to help visualize the differences: a maxed-out M3 Max vs a 16GB M1 Pro vs a 4090, running Llama 2 at 7B/13B/70B. https://youtu.be/jaM02mb6JFM
Here’s a Reddit comparison of a 4090 vs an M2 Ultra (96GB) with tokens/s numbers:
https://old.reddit.com/r/LocalLLaMA/comments/14319ra/rtx_409...
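If you want to sanity-check numbers like those on your own hardware, a rough timing harness with Hugging Face transformers looks something like the sketch below. The model ID, prompt, and dtype are placeholders, not the setup used in the linked comparisons, and the figure it prints is crude (it includes prompt prefill and assumes generation doesn't stop early).

```python
# Rough tokens/s measurement with Hugging Face transformers.
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # placeholder: any causal LM you have locally
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tok("The quick brown fox", return_tensors="pt").to(model.device)
n_new = 128

start = time.time()
model.generate(**inputs, max_new_tokens=n_new, do_sample=False)
elapsed = time.time() - start

# Crude figure: counts prefill time and assumes no early stop at EOS.
print(f"~{n_new / elapsed:.1f} tokens/s")
```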
M3 Pro: 150 GB/s memory bandwidth
M3 Max (10/30): 300 GB/s
M3 Max (12/40): 400 GB/s
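Those bandwidth figures matter because single-stream decoding has to stream essentially all of the weights through memory for every generated token, so tokens/s is roughly bounded by bandwidth divided by model size. A back-of-envelope sketch (the quantization size below is an assumption, not a measurement):

```python
# Back-of-envelope decode ceiling: each generated token reads ~all model weights once,
# so tokens/s is at most (memory bandwidth) / (model size in bytes).
def decode_ceiling_tok_s(bandwidth_gb_s: float, n_params_billion: float,
                         bytes_per_param: float) -> float:
    model_gb = n_params_billion * bytes_per_param  # e.g. 7B at 4-bit ~= 3.5 GB
    return bandwidth_gb_s / model_gb

# Hypothetical comparison for a 7B model quantized to ~4 bits/param (0.5 bytes):
for name, bw in [("M3 Pro", 150), ("M3 Max 12/40", 400), ("RTX 4090", 1008)]:
    print(f"{name}: ~{decode_ceiling_tok_s(bw, 7, 0.5):.0f} tok/s upper bound")
```

Real numbers land well below that ceiling (KV cache reads, activations, and compute all cost something), but the ratios between devices track the measured tokens/s fairly closely.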
“Llama models are mostly limited by memory bandwidth.
RTX 3090 has 935.8 GB/s
RTX 4090 has 1008 GB/s
M2 Ultra has 800 GB/s
M2 Max has 400 GB/s
so the 4090 is ~10% faster for Llama inference than the 3090
and more than 2x faster than the Apple M2 Max
https://github.com/turboderp/exllama
using exllama you can get 160 tokens/s on a 7B model and 97 tokens/s on a 13B model
while the M2 Max gets only 40 tokens/s on 7B and 24 tokens/s on 13B
The memory bandwidth cap is also the reason why Llamas work so well on CPU.
(…)
buying a second GPU will increase memory capacity to 48GB but has no effect on bandwidth
so 2x 4090 will have 48GB of VRAM, 1008 GB/s of bandwidth, and 50% utilization”
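The 50% utilization figure at the end follows from the usual layer-split setup: each GPU holds half the layers, a single decode stream passes through them one after the other, so only one card is busy at a time and aggregate bandwidth doesn't add. A toy model of that, with made-up sizes:

```python
# Toy model of naive layer-split (pipeline) decoding across N identical GPUs:
# each GPU streams its 1/N share of the weights, but the shares are processed
# sequentially, so tokens/s is the same as on a single GPU.
def pipeline_decode_tok_s(bandwidth_gb_s: float, model_gb: float, n_gpus: int) -> float:
    time_per_gpu = (model_gb / n_gpus) / bandwidth_gb_s  # seconds each card spends per token
    return 1.0 / (time_per_gpu * n_gpus)                 # == bandwidth_gb_s / model_gb

# Hypothetical ~24 GB (quantized) model on one vs two 4090-class cards:
print(pipeline_decode_tok_s(1008, 24, n_gpus=1))  # ~42 tok/s, the single card 100% busy
print(pipeline_decode_tok_s(1008, 24, n_gpus=2))  # same ~42 tok/s, each card ~50% busy
```

Tensor-parallel splits can keep both cards busy on the same token, at the cost of inter-GPU communication; the quoted figures presumably assume the simpler layer split. Either way the second card buys capacity for a bigger model, not more bandwidth per token.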