548 points nsagent | 10 comments
lukev ◴[] No.44567263[source]
So to make sure I understand, this would mean:

1. Programs built against MLX -> Can take advantage of CUDA-enabled chips

but not:

2. CUDA programs -> Can now run on Apple Silicon.

Because #2 would be a copyright violation (specifically with respect to Nvidia's famous moat).

Is this correct?

replies(9): >>44567309 #>>44567350 #>>44567355 #>>44567600 #>>44567699 #>>44568060 #>>44568194 #>>44570427 #>>44577999 #
quitit ◴[] No.44567355[source]
It's 1.

It means that a developer can use their relatively low-powered Apple device (with UMA) to develop for deployment on nvidia's relatively high-powered systems.

That's nice to have for a range of reasons.
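
A rough sketch of that workflow in Python, assuming the merged CUDA backend keeps the existing mlx.core API and is picked up through the usual default-device mechanism (that's the stated intent, but the details below are illustrative, not taken from the PR):

    # Same script on a Mac (Metal) or an Nvidia box (CUDA), if the backend
    # really is a drop-in: only what the GPU device maps to changes.
    import mlx.core as mx

    mx.set_default_device(mx.gpu)  # Metal locally; CUDA where it's available

    a = mx.random.normal((4096, 4096))
    b = mx.random.normal((4096, 4096))
    c = mx.matmul(a, b)            # recorded lazily
    mx.eval(c)                     # executed on whichever GPU backend is active

    print(mx.default_device())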

replies(5): >>44568550 #>>44568740 #>>44569683 #>>44570543 #>>44571119 #
1. _zoltan_ ◴[] No.44568550[source]
"relatively high powered"? there's nothing faster out there.
replies(4): >>44568714 #>>44568716 #>>44568748 #>>44569262 #
2. chvid ◴[] No.44568714[source]
Relative to what you can get in the cloud or on a desktop machine.
3. MangoToupe ◴[] No.44568716[source]
Is this true per watt?
replies(1): >>44569017 #
4. sgt101 ◴[] No.44568748[source]
I wonder what Apple would have to do to make Metal + its processors run faster than Nvidia? I guess it's all about the interconnects, really.
replies(1): >>44569316 #
5. spookie ◴[] No.44569017[source]
It doesn't matter for a lot of applications. But fair, for a big share of them it's either essential or a nice-to-have. It's completely beside the point, though, if we're chasing the fastest compute no matter what.
replies(1): >>44570777 #
6. quitit ◴[] No.44569262[source]
Relative to the Apple hardware, the Nvidia hardware is high-powered.

I appreciate that English is your second language after your Hungarian mother tongue. My comment contrasts the low-powered compute of the Apple hardware with the high-powered compute of the Nvidia hardware.

7. summarity ◴[] No.44569316[source]
Right now, for LLMs, the only limiting factor on Apple Silicon is memory bandwidth. There hasn’t been progress on this since the original M1 Ultra. And since abandoning UltraFusion, we won’t see progress here anytime soon either.
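
To see why bandwidth rather than raw FLOPs is the ceiling, a back-of-envelope bound (the numbers are illustrative assumptions, not benchmarks): each generated token has to stream roughly the full set of weights from memory, so single-stream decode speed is capped at bandwidth divided by model size.

    # Hypothetical figures: ~800 GB/s for an M1/M2 Ultra-class chip,
    # a 70B-parameter model quantized to ~4 bits per weight.
    bandwidth_gb_s = 800
    model_gb = 70e9 * 0.5 / 1e9          # ~35 GB of weights
    print(bandwidth_gb_s / model_gb)     # ~23 tokens/s upper bound per stream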
replies(3): >>44569480 #>>44569623 #>>44569854 #
8. glhaynes ◴[] No.44569623{3}[source]
Have they abandoned UltraFusion? Last I’d heard, they’d just said something like “not all generations will get an Ultra chip” around the time the M4 showed up (the first M chip lacking an Ultra variation), which makes me think the M5 or M6 is fairly likely to get an Ultra.
9. librasteve ◴[] No.44569854{3}[source]
This is like saying the only limiting factor on computers is the von Neumann bottleneck.
10. johnboiles ◴[] No.44570777{3}[source]
...fastest compute no matter watt