Surely a smaller market than gamers or datacenters.
It’s purely an ecosystem play imho. It benefits the kind of people who will go on to make potentially cool things and will stay loyal.
100%
The people who prototype on a $3k workstation will also be the people who decide how to architect a 3,000-GPU buildout for model training.
I have a bit of an interest in games too.
If I could get one platform for both, I could justify $2k, maybe a bit more.
I can't justify that for just one half, and gaming on a Mac (which right now means via Linux): no thanks.
And on the PC side, Nvidia consumer cards only go up to 24 GB, which is limiting for LLMs, while being very expensive; I only play games every few months.
It will be massive for research labs. Most academics have to jump through a lot of hoops just to get to play with CUDA, let alone GPUDirect/RDMA/InfiniBand, etc. If you get older/donated hardware, you may have a large cluster but not the newer features.
No one goes to an Apple Store thinking "I'll get a laptop to do AI inference".
Performance is not amazing (roughly 4060 level, I think?), but in many ways it's the only game in town unless you're willing and able to build a multi-3090/4090 rig.
Maybe (LP)CAMM2 memory will make running models just cheap enough that I can have a dedicated hosting server for them and keep doing my usual midrange gaming GPU thing in the meantime.
If what you say is true, you were among the first 100 people on the planet doing this, which, btw, further supports my argument about how extremely rare that use case is for Mac users.