
548 points nsagent | 2 comments
lukev ◴[] No.44567263[source]
So to make sure I understand, this would mean:

1. Programs built against MLX -> Can take advantage of CUDA-enabled chips (see the sketch below)

but not:

2. CUDA programs -> Can now run on Apple Silicon.

Because #2 would be a copyright violation (specifically with respect to Nvidia's famous moat).

Is this correct?
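To illustrate what #1 means in practice, here is a minimal sketch (not from the thread) using the standard mlx.core Python API. MLX code targets an abstract mx.gpu device, so the same script can run on Metal on Apple Silicon or, when MLX is built with its CUDA backend, on NVIDIA hardware; whether mx.gpu actually resolves to CUDA depends on the build, which is an assumption here rather than something the thread states.

    import mlx.core as mx

    # Sketch: no CUDA-specific calls appear in user code. mx.gpu is Metal on
    # a Mac build and (assumed) CUDA on a Linux/NVIDIA build of MLX.
    mx.set_default_device(mx.gpu)

    a = mx.random.normal((1024, 1024))
    b = mx.random.normal((1024, 1024))
    c = mx.matmul(a, b)   # MLX builds the graph lazily
    mx.eval(c)            # materialize on whichever backend mx.gpu resolves to

    print(mx.default_device())

The point is that the portability lives in the framework: user code stays device-agnostic, and the backend underneath is what changes.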

replies(9): >>44567309 #>>44567350 #>>44567355 #>>44567600 #>>44567699 #>>44568060 #>>44568194 #>>44570427 #>>44577999 #
tekawade ◴[] No.44567699[source]
I want #3: to be able to connect an NVIDIA GPU to Apple Silicon and run CUDA. Take advantage of Apple Silicon + unified memory + GPU + CUDA with PyTorch, JAX, or TensorFlow.

Haven’t really explored MLX so can’t speak about it.
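For context, the framework side of this wish already exists; what is missing is driver support for an attached NVIDIA GPU on Apple Silicon. A minimal PyTorch sketch (not from the thread) of the device-selection side, preferring CUDA and falling back to Apple's MPS backend:

    import torch

    def pick_device() -> torch.device:
        # Hypothetical setup for #3: prefer an NVIDIA GPU if one were usable,
        # otherwise fall back to the Apple GPU, then CPU.
        if torch.cuda.is_available():          # NVIDIA GPU via CUDA
            return torch.device("cuda")
        if torch.backends.mps.is_available():  # Apple GPU via Metal (MPS)
            return torch.device("mps")
        return torch.device("cpu")

    device = pick_device()
    x = torch.randn(1024, 1024, device=device)
    y = x @ x
    print(device, y.shape)

On today's Apple Silicon Macs the CUDA branch never triggers, because there is no NVIDIA driver stack for the platform; that gap, not the Python API, is the barrier.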

replies(1): >>44578022 #
1. musicale ◴[] No.44578022[source]
Nvidia already has unified memory on Grace Blackwell etc.

I guess M5 Blackwell could be better, but there are business and technical barriers to making that happen.

replies(1): >>44600346 #
2. tekawade ◴[] No.44600346[source]
I meant that the Apple ecosystem no longer supports NVIDIA GPUs. NVIDIA itself, with CUDA, does support DMA.