
548 points by nsagent | 1 comment
lukev | No.44567263
So to make sure I understand, this would mean:

1. Programs built against MLX -> can take advantage of CUDA-enabled chips

but not:

2. CUDA programs -> can now run on Apple Silicon.

Because #2 would be a copyright violation (specifically, with respect to NVIDIA's famous moat).

Is this correct?

replies(9): >>44567309 >>44567350 >>44567355 >>44567600 >>44567699 >>44568060 >>44568194 >>44570427 >>44577999
saagarjha | No.44567309
No, it's because doing #2 would be substantially harder.
replies(2): >>44567356 >>44567414
lukev | No.44567356
There's a massive financial incentive (billions) to let existing CUDA code run on non-NVIDIA hardware. Not saying it's easy, but is implementation difficulty really the blocker?
replies(5): >>44567393 >>44567539 >>44568123 >>44573767 >>44574809
fooker | No.44568123
Existing high-performance CUDA code is almost all first-party libraries, written by NVIDIA, and it uses weird internal flags and inline PTX.
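[To illustrate the inline-PTX point: a minimal, hypothetical CUDA kernel that embeds PTX assembly directly. The `%laneid` special register read below is specific to NVIDIA's ISA, so a translation layer can't simply recompile this source for other hardware — it has to understand and re-express the embedded assembly. This is a sketch, not code from any NVIDIA library.]

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void lane_id_kernel(unsigned int *out) {
    unsigned int lane;
    // Inline PTX: read the hardware lane-id special register directly.
    // There is no portable C++ equivalent; this is NVIDIA-ISA-specific.
    asm volatile("mov.u32 %0, %%laneid;" : "=r"(lane));
    out[threadIdx.x] = lane;
}

int main() {
    unsigned int *out;
    cudaMallocManaged(&out, 32 * sizeof(unsigned int));
    lane_id_kernel<<<1, 32>>>(out);  // one warp of 32 threads
    cudaDeviceSynchronize();
    for (int i = 0; i < 32; ++i) printf("%u ", out[i]);  // lane ids 0..31
    printf("\n");
    cudaFree(out);
    return 0;
}
```

[Real first-party libraries like cuBLAS go much further, with hand-scheduled PTX/SASS tuned per architecture, which is where the "last 10%" lives.]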

You can get 90% of the way there with a small team of compiler devs. The remaining 10% would take hundreds of people working for ten years. The cost of that is suspiciously close to the billions in financial incentive you mentioned; funny how efficient markets work.

replies(2): >>44568168 >>44568589
pjmlp | No.44568589
And the tooling — people keep forgetting about CUDA tooling.