623 points magicalhippo | 13 comments
Karupan ◴[] No.42619320[source]
I feel this is bigger than the 50-series GPUs. Given the craze around AI/LLMs, this could also eat into Apple’s slice of the enthusiast AI dev segment once the M4 Max/Ultra Mac minis are released. I sure wish I had held some Nvidia stock; they seem to have done everything right over the last few years!
replies(21): >>42619339 #>>42619433 #>>42619472 #>>42619544 #>>42619769 #>>42620175 #>>42620289 #>>42620359 #>>42620740 #>>42621569 #>>42621821 #>>42622149 #>>42622154 #>>42622259 #>>42622359 #>>42622567 #>>42622577 #>>42622621 #>>42622863 #>>42627093 #>>42627188 #
dagmx ◴[] No.42619339[source]
I think the enthusiast side of things is a negligible part of the market.

That said, enthusiasts do help drive a lot of the improvements to the tech stack, so if they start using this, it’ll entrench NVIDIA even more.

replies(7): >>42619397 #>>42619404 #>>42619430 #>>42619479 #>>42619510 #>>42619885 #>>42621646 #
Karupan ◴[] No.42619510[source]
I’m not so sure it’s negligible. My anecdotal experience is that since Apple Silicon chips were found to be “ok” enough to run inference with MLX, more non-technical people in my circle have asked me how they can run LLMs on their Macs.

A smaller market than gamers or datacenters, for sure.
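
(For anyone wondering what "running inference with MLX" actually involves, here is a minimal sketch using the mlx-lm package, assuming it is installed; the model repo name is only an illustrative example:)

    # Minimal sketch of local LLM inference with mlx-lm on Apple Silicon.
    # Assumes `pip install mlx-lm`; the model repo below is only an example.
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.2-4bit")
    text = generate(model, tokenizer,
                    prompt="Explain unified memory in one sentence.",
                    max_tokens=100)
    print(text)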

replies(3): >>42619637 #>>42620854 #>>42622080 #
1. moralestapia ◴[] No.42622080[source]
Yes, but people already had their Macs for other reasons.

No one goes to an Apple store thinking "I'll get a laptop to do AI inference".

replies(4): >>42622296 #>>42622421 #>>42622639 #>>42623427 #
2. the_other ◴[] No.42622296[source]
I'm currently wondering how likely it is I'll get into deeper LLM usage, and therefore how much Apple Silicon I need (because I'm addicted to macOS). So I'm some way closer to your steel man than you'd expect. But I'm probably a niche within a niche.
3. JohnBooty ◴[] No.42622421[source]
They have, because until now Apple Silicon was the only practical way for many to work with larger models at home: the machines can be configured with 64-192GB of unified memory, and even the laptops go up to 128GB.

Performance is not amazing (roughly 4060 level, I think?) but in many ways it was the only game in town unless you were willing and able to build a multi-3090/4090 rig.

replies(1): >>42624005 #
4. kelsey98765431 ◴[] No.42622639[source]
My $5k M3 Max with 128GB disagrees.
replies(1): >>42623970 #
5. com2kid ◴[] No.42623427[source]
Tons of people do; my next machine will likely be a Mac, 60% for this reason and 40% because Windows is so user-hostile now.
6. moralestapia ◴[] No.42623970[source]
Doubt it; a year ago, running useful local LLMs on a Mac (via something like ollama) was barely taking off.

If what you say is true, you were among the first 100 people on the planet doing this; which, btw, further supports my argument about how extremely rare that use case is among Mac users.

replies(2): >>42625331 #>>42628423 #
7. moralestapia ◴[] No.42624005[source]
I would bet that people running LLMs on their Macs, today, make up <0.1% of the user base.
replies(3): >>42625314 #>>42626764 #>>42627711 #
8. sroussey ◴[] No.42625314{3}[source]
People buying Macs for LLMs? Sure, I agree.

Since the current macOS ships with small LLMs built in, that number might be closer to 50% than 0.1%.

replies(1): >>42627383 #
9. sroussey ◴[] No.42625331{3}[source]
No, I got a 14” MacBook Pro with an M2 Max and 64GB for LLMs, and that was two generations back.
10. justincormack ◴[] No.42626764{3}[source]
Higher than that among buyers of the top-end machines though, and those are very high margin.
11. moralestapia ◴[] No.42627383{4}[source]
I'm not arguing about whether or not Macs are capable of doing it, but about whether it's a material force that drives people to buy Macs; it's not.
12. throwaway48476 ◴[] No.42627711{3}[source]
All Macs? Yes. But of 192GB Mac configs? Probably >50%.
13. kgwgk ◴[] No.42628423{3}[source]
People were running llama.cpp on Mac laptops in March 2023, and Llama 2 was released in July 2023. People were buying Macs to run LLMs months before M3 machines became available in November 2023.
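
For reference, a minimal sketch of that kind of local llama.cpp inference, here via the llama-cpp-python bindings (assuming the package is installed and a GGUF model file has already been downloaded; the path is just a placeholder):

    # Minimal sketch of llama.cpp inference on a Mac via its Python bindings.
    # Assumes `pip install llama-cpp-python`; the model path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(model_path="./llama-2-7b-chat.Q4_K_M.gguf",
                n_ctx=2048,
                n_gpu_layers=-1)  # -1 offloads all layers to Metal on Apple Silicon
    out = llm("Q: What is unified memory? A:", max_tokens=64)
    print(out["choices"][0]["text"])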