
548 points | nsagent | 1 comment
lukev ◴[] No.44567263[source]
So to make sure I understand, this would mean:

1. Programs built against MLX -> Can take advantage of CUDA-enabled chips

but not:

2. CUDA programs -> Can now run on Apple Silicon.

Because #2 would be a copyright violation (specifically, with respect to NVidia's famous moat).

Is this correct?
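
For concreteness, a minimal sketch of what I mean by #1, using MLX's Python API (assuming an MLX build with the CUDA backend enabled; the shapes and ops are purely illustrative):

    import mlx.core as mx

    # The same MLX program is backend-agnostic: on Apple Silicon the GPU
    # device is backed by Metal, while a CUDA-enabled build dispatches the
    # same ops to an NVIDIA GPU. The user never writes any CUDA code.
    mx.set_default_device(mx.gpu)

    a = mx.random.normal((1024, 1024))
    b = mx.random.normal((1024, 1024))
    c = a @ b    # runs on whichever GPU backend MLX was built with
    mx.eval(c)   # MLX is lazy, so force the computation
    print(c.dtype, c.shape)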

replies(9): >>44567309 #>>44567350 #>>44567355 #>>44567600 #>>44567699 #>>44568060 #>>44568194 #>>44570427 #>>44577999 #
saagarjha ◴[] No.44567309[source]
No, it's because doing 2 would be substantially harder.
replies(2): >>44567356 #>>44567414 #
lukev ◴[] No.44567356[source]
There's a massive financial incentive (billions) to allow existing CUDA code to run on non-NVidia hardware. Not saying it's easy, but is implementation difficulty really the blocker?
replies(5): >>44567393 #>>44567539 #>>44568123 #>>44573767 #>>44574809 #
saagarjha ◴[] No.44567393[source]
Yes. See: AMD
replies(1): >>44567420 #
lukev ◴[] No.44567420[source]
AMD has never implemented the CUDA API. And not for technical reasons.
replies(1): >>44567444 #
gpm ◴[] No.44567444[source]
They did, or at least they paid someone else to.

https://www.techpowerup.com/319016/amd-develops-rocm-based-s...

replies(2): >>44568534 #>>44579527 #
pjmlp ◴[] No.44579527{4}[source]
Partially: the CUDA C++ API, not the CUDA APIs.