
548 points nsagent | source
lukev ◴[] No.44567263[source]
So to make sure I understand, this would mean:

1. Programs built against MLX -> Can take advantage of CUDA-enabled chips

but not:

2. CUDA programs -> Can now run on Apple Silicon.

Because #2 would be a copyright violation (specifically with respect to NVidia's famous moat).

Is this correct?

replies(9): >>44567309 #>>44567350 #>>44567355 #>>44567600 #>>44567699 #>>44568060 #>>44568194 #>>44570427 #>>44577999 #
sitkack ◴[] No.44568194[source]
#2 is not a copyright violation. You can reimplement APIs.
replies(2): >>44568364 #>>44568387 #
adastra22 ◴[] No.44568387[source]
CUDA is not just an API; it is a set of libraries written by NVIDIA. You'd have to reimplement those libraries, and for anyone to care at all you'd have to reimplement the optimizations in those libraries too. That does get into various IP issues.
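For what it's worth, "reimplementing the API" at the runtime level looks roughly like this. This is a toy sketch, not a working backend: the function names and signatures mirror the real CUDA runtime, but the bodies here just fall back to plain host memory, which is exactly the part a serious reimplementation would replace with real device allocation and transfers.

```c
/* Toy sketch of a CUDA runtime API shim. The signatures match the real
 * cudaMalloc/cudaMemcpy/cudaFree, but everything is backed by host
 * memory here -- a real backend would target actual device memory. */
#include <stdlib.h>
#include <string.h>

typedef int cudaError_t;
enum { cudaSuccess = 0, cudaErrorMemoryAllocation = 2 };

cudaError_t cudaMalloc(void **devPtr, size_t size) {
    /* Real implementation: allocate on the device. Here: plain malloc. */
    *devPtr = malloc(size);
    return *devPtr ? cudaSuccess : cudaErrorMemoryAllocation;
}

cudaError_t cudaMemcpy(void *dst, const void *src, size_t count, int kind) {
    (void)kind; /* host<->device direction is ignored in this host-only toy */
    memcpy(dst, src, count);
    return cudaSuccess;
}

cudaError_t cudaFree(void *devPtr) {
    free(devPtr);
    return cudaSuccess;
}
```

Matching the signatures is the easy part; the moat is the years of tuned kernels (cuBLAS, cuDNN, etc.) sitting behind them.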
replies(3): >>44568506 #>>44568575 #>>44570953 #
vFunct ◴[] No.44570953[source]
In case people aren't aware: you can have AI reimplement CUDA libraries for any hardware, as well as develop new ones.

You won't believe me until you try it and see for yourself, so try it.

NVidia's CUDA moat is no more.

replies(1): >>44578964 #
adastra22 ◴[] No.44578964{3}[source]
If it is so easy, please go do so.