
548 points | nsagent | 3 comments
lukev (No.44567263):
So to make sure I understand, this would mean:

1. Programs built against MLX -> Can take advantage of CUDA-enabled chips

but not:

2. CUDA programs -> Can now run on Apple Silicon.

Because #2 would be a copyright violation (specifically with respect to Nvidia's famous moat).

Is this correct?

quitit (No.44567355):
It's 1.

It means that a developer can use their relatively low-powered Apple device (with unified memory) to develop for deployment on Nvidia's relatively high-powered systems.

That's nice to have for a range of reasons.

_zoltan_ (No.44568550):
"Relatively high-powered"? There's nothing faster out there.
MangoToupe (No.44568716):
Is this true per watt?
spookie (No.44569017):
Performance per watt doesn't matter for a lot of applications, though to be fair, for a big share of them it's either essential or a nice-to-have. But it's completely beside the point if we're chasing the fastest compute, no matter what.
johnboiles (No.44570777):
...fastest compute no matter watt