
172 points | marban | 1 comment
InTheArena ◴[] No.40051885[source]
While everyone has focused on Apple's power-efficiency on the M series chips, one thing that has been very interesting is how powerful the unified memory model (by having the memory on-package with CPU) with large bandwidth to the memory actually is. Hence a lot of people in the local LLMA community are really going after high-memory Macs.

It's great to see NPUs here with the new Ryzen cores - but I wonder how effective they will be with off-die memory versus the Apple approach.

That said, it's nothing but great to see these capabilities in something other than an expensive Nvidia card. Local NPUs may really help with deploying more inferencing capability at the edge.

Edited - sorry, meant on-package.
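The bandwidth point can be put into rough numbers. Autoregressive decoding reads roughly the entire weight file once per generated token, so memory bandwidth sets a ceiling on tokens per second. A minimal napkin-math sketch (the bandwidth and model-size figures below are illustrative assumptions, not measurements):

```python
# Napkin math: token-by-token decoding is memory-bandwidth bound, since
# roughly the whole weight file is read once per token. So an upper bound
# on decode speed is bandwidth / model size. Ignores compute, KV cache,
# and prompt processing. All figures here are illustrative assumptions.

def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Bandwidth-bound ceiling on decode tokens/sec."""
    return bandwidth_gb_s / model_size_gb

# Assumed figures: ~400 GB/s unified-memory bandwidth (M1 Max class),
# ~4 GB for a 7B-parameter model quantized to 4 bits per weight.
print(max_tokens_per_sec(400, 4))  # ceiling of ~100 tokens/sec
```

Real throughput lands well below this ceiling, but the ratio explains why on-package bandwidth matters so much for local inference.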

replies(8): >>40051950 #>>40052032 #>>40052167 #>>40052857 #>>40053126 #>>40054064 #>>40054570 #>>40054743 #
chaostheory ◴[] No.40052032[source]
What Apple has is theoretically great on paper, but it fails to live up to expectations. What's the point of having the RAM to run an LLM locally when the performance is abysmal compared to running it on even a consumer Nvidia GPU? It's a missed opportunity that I hope either the M4 or M5 addresses.
replies(8): >>40052327 #>>40052344 #>>40052929 #>>40053695 #>>40053835 #>>40054577 #>>40054855 #>>40056153 #
InTheArena ◴[] No.40052327[source]
The performance of ollama on my M1 Max is pretty solid - and it does things that my 2070 GPU can't because of memory.
replies(1): >>40052675 #
dangus ◴[] No.40052675[source]
Not that I don’t believe you, but the 2070 is two generations and five years old. Maybe a comparison to a 4000-series card would be more appropriate?
replies(2): >>40052731 #>>40052773 #
Teever ◴[] No.40052773[source]
Well, you know it would still be able to do more than a 4000-series GPU from Nvidia, because a Mac can have more unified memory than a 4000-series GPU has VRAM.
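The capacity argument comes down to simple arithmetic: weight footprint is parameter count times bytes per weight. A short sketch under assumed, illustrative figures (24 GB is the top consumer 40-series card's VRAM; high-end Mac unified-memory configurations reach well past 100 GB):

```python
# Rough weight footprint in GB: parameters (in billions) * bytes per weight.
# Ignores KV cache and activations, so real needs are somewhat higher.

def weight_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate model weight size in GB."""
    return params_billion * bits_per_weight / 8

# A 70B model at 16-bit weighs ~140 GB -- far beyond a 24 GB consumer
# card, but loadable on a high-memory unified-memory Mac. Even at 4-bit
# (~35 GB) it still doesn't fit in 24 GB of VRAM.
print(weight_gb(70, 16))  # 140.0
print(weight_gb(70, 4))   # 35.0
```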
replies(1): >>40052862 #
dangus ◴[] No.40052862[source]
Yes, obviously I’m aware that you can throw more RAM at an M-series GPU.

But of course that’s only helpful for specific workflows.