623 points by magicalhippo | 6 comments
Karupan:
I feel this is bigger than the 50-series GPUs. Given the craze around AI/LLMs, this could also eat into Apple's slice of the enthusiast AI dev segment once the M4 Max/Ultra Mac minis are released. I sure wish I'd held some Nvidia stock; they seem to have done everything right over the last few years!
dagmx:
I think the enthusiast side of things is a negligible part of the market.

That said, enthusiasts do help drive a lot of the improvements to the tech stack, so if they start using this, it'll entrench NVIDIA even more.

Karupan:
I'm not so sure it's negligible. My anecdotal experience is that since Apple Silicon chips were found to be "ok" enough to run inference with MLX (a minimal sketch is below), more non-technical people in my circle have asked me how they can run LLMs on their Macs.

A smaller market than gamers or datacenters, for sure.
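
For context, the MLX workflow being referred to is only a few lines with Apple's mlx-lm package. A minimal sketch, assuming mlx-lm is installed (pip install mlx-lm) and using an illustrative model name:

    # Minimal local-inference sketch with Apple's mlx-lm package.
    # The model name is illustrative; any mlx-community 4-bit
    # conversion loads the same way.
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")
    reply = generate(model, tokenizer,
                     prompt="Explain unified memory in one sentence.",
                     max_tokens=100)
    print(reply)

On an M-series Mac the weights load straight into unified memory, which is what makes the larger memory configurations interesting for this crowd.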

moralestapia:
Yes, but people already had their Macs for other reasons.

No one goes to an Apple store thinking "I'll get a laptop to do AI inference".

JohnBooty:
They have, because until now Apple Silicon was the only practical way for many to work with larger models at home: the desktops can be configured with 64-192GB of unified memory. Even the laptops can be configured with up to 128GB (rough math below).

Performance is not amazing (roughly 4060 level, I think?), but in many ways it was the only game in town unless you were willing and able to build a multi-3090/4090 rig.
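
The back-of-envelope math behind that claim, as a sketch (weights only; KV cache and runtime overhead ignored):

    # Back-of-envelope: GB of memory needed just to hold the weights.
    # 1e9 params * (bits/8) bytes / 1e9 bytes-per-GB = params_b * bits / 8.
    def weight_gb(params_billion: float, bits_per_weight: int) -> float:
        return params_billion * bits_per_weight / 8

    for p, b in [(70, 4), (70, 8), (180, 4)]:
        print(f"{p}B @ {b}-bit ~ {weight_gb(p, b):.0f} GB")
    # 70B  @ 4-bit ~ 35 GB -> fits in 64GB unified memory, not a 24GB 4090
    # 70B  @ 8-bit ~ 70 GB -> needs a 96GB+ Mac or several 3090/4090s
    # 180B @ 4-bit ~ 90 GB -> only the 128-192GB configs get there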

moralestapia:
I would bet that people running LLMs on their Macs today make up <0.1% of the user base.
sroussey:
People buying Macs specifically for LLMs: sure, I agree.

But since the current macOS ships with small built-in LLMs, the number of Macs running LLMs might be closer to 50% than 0.1%.

justincormack:
Higher than that among buyers of the top-end machines, though, and those are very high margin.
moralestapia:
I'm not arguing about whether Macs are capable of it, but about whether it's a material force driving people to buy Macs; it's not.
throwaway48476:
All Macs? Yes. But of 192GB Mac configs? Probably >50%.