255 points tbruckner | 1 comment

superkuh[dead post] ◴[] No.37420475[source]
[flagged]
sbierwagen ◴[] No.37420490[source]
M2 Mac Studio with 192GB of RAM is US$5,599 right now.
replies(3): >>37420616 #>>37420693 #>>37427799 #
superkuh[dead post] ◴[] No.37420693[source]
[flagged]
yumraj ◴[] No.37420789[source]
It’s not useless.

It seems a Thunderbolt/USB4 external NVMe enclosure can do about 2500-3000 MB/s, which is about half the speed of the internal SSD. So not at all bad. It’ll just add an extra few tens of seconds while loading the model. Totally manageable.

Edit: in fact this is the proper route anyway, since it allows you to work with huge models and intermediate FP16/FP32 files while quantizing. Internal storage, regardless of how much, will run out quickly.
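
A quick back-of-the-envelope sketch (the 100 GB model size and the internal-SSD speed below are assumptions, not measurements):

    # Rough load-time estimate for reading a quantized model over a
    # Thunderbolt/USB4 NVMe enclosure vs. an internal SSD.
    model_size_gb = 100
    speeds_mb_s = {
        "external NVMe enclosure": 2750,  # midpoint of the quoted 2500-3000 MB/s
        "internal SSD": 5500,             # roughly double, per the comment above
    }
    for name, mb_s in speeds_mb_s.items():
        seconds = model_size_gb * 1000 / mb_s
        print(f"{name}: ~{seconds:.0f} s for {model_size_gb} GB")

So even a very large quantized model costs on the order of an extra ~20 seconds per load from the enclosure.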

replies(1): >>37420889 #
superkuh ◴[] No.37420889[source]
>Internal storage, regardless of how much, will run out quickly.

This only applies to Macs and Mac-a-likes. Actual desktop PCs have plenty of SATA ports and can store reasonable amounts of data without the crutch of external, higher-latency storage making things iffy. I say this as someone with TBs of llama models on disk, and I sometimes do the quantization myself.

BTW my computer cost <$900 with 17TB of storage currently and can run up to a 34B 5-bit LLM. I could spend $250 more to upgrade to 128GB of DDR4-2666 RAM and run the 65B/70B models, but 180B is out of range. You do have to spend big money for that.
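
For a rough sense of what fits where, a minimal sketch of the usual rule of thumb (parameter count times bits per weight, ignoring KV cache and runtime overhead):

    # Rule-of-thumb RAM needed just to hold a k-bit quantized model's weights.
    # Ignores KV cache and runtime overhead; purely illustrative.
    def model_ram_gb(params_billion: float, bits_per_weight: float) -> float:
        # params_billion * 1e9 weights * bits / 8 bits-per-byte / 1e9 bytes-per-GB
        return params_billion * bits_per_weight / 8

    for params in (34, 70, 180):
        print(f"{params}B at 5-bit: ~{model_ram_gb(params, 5):.0f} GB of weights")

Which lines up with 34B fitting in a modest box, 65B/70B wanting the RAM upgrade once context and overhead are added, and 180B needing well over 100 GB for the weights alone.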

replies(4): >>37421057 #>>37421079 #>>37421096 #>>37422593 #
yumraj ◴[] No.37421079[source]
We’re talking about 192GB of GPU-accessible memory here.

Or are you comparing with CPU inference? In which case it’s apples to oranges.

How much do GPUs with 192GB of RAM cost?

Edit: also, I think (unverified) very, very few systems have multiple PCIe 3/4 NVMe slots. There are companies selling PCIe cards that can take NVMe drives, but that will by itself cost, without the NVMe drives, more than your $900 system.

replies(1): >>37421909 #
superkuh ◴[] No.37421909[source]
Yes, CPU inference. For llama.cpp on Apple M1/M2, GPU inference (via Metal) is about 5x faster than CPU for text generation and about the same speed for prompt processing. Not insignificant, but not giant either.
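
For anyone who wants to reproduce that comparison, a minimal sketch in Python; the model path is hypothetical and it assumes a llama.cpp build whose main binary takes the usual -m/-p/-n/-ngl flags (-ngl 0 keeps everything on the CPU, a large value offloads all layers to Metal):

    # Crude wall-clock comparison of CPU-only vs Metal-offloaded generation
    # with a llama.cpp build. Model path and token count are assumptions.
    import subprocess, time

    MODEL = "models/llama-34b.Q5_K_M.gguf"  # hypothetical quantized model file

    def run(n_gpu_layers: int) -> float:
        start = time.time()
        subprocess.run(
            ["./main", "-m", MODEL, "-p", "Hello", "-n", "128",
             "-ngl", str(n_gpu_layers)],
            check=True, capture_output=True,
        )
        return time.time() - start

    print(f"CPU only (-ngl 0):  {run(0):.1f} s")
    print(f"Metal (-ngl 999):   {run(999):.1f} s")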

You generally can't hook up large storage drives to NVMe. Those are all tiny flash storage. I'm not sure why you brought it up.

replies(1): >>37422034 #
yumraj ◴[] No.37422034[source]
> You generally can't hook up large storage drives to NVMe. Those are all tiny flash storage.

What’s your definition of large?

2TB and 4TB NVMe drives are not tiny. You can even buy 8TB NVMe drives, though those are more expensive and IMHO not worth it for this use case.

2TB NVMe drives are $60-$100 right now.

You can attach several of those via Thunderbolt/USB4 enclosures providing 2500-3000 MB/s each.
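
Rough cost math from those figures (the enclosure price is an assumption; the drive price is the midpoint of the range above):

    # Back-of-the-envelope cost for external NVMe capacity.
    drives = 4
    drive_price = 80        # midpoint of the quoted $60-$100 for a 2TB NVMe
    enclosure_price = 100   # assumed USB4/Thunderbolt enclosure price
    total_tb = drives * 2
    total_cost = drives * (drive_price + enclosure_price)
    print(f"~{total_tb} TB of external NVMe for roughly ${total_cost}")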