
548 points by nsagent | 1 comment | source
MuffinFlavored ◴[] No.44566235[source]
Is this for Macs with NVIDIA cards in them, or for Apple Silicon/Metal speaking CUDA? I can't really tell.

Edit: looks like it's "write once, run everywhere": write MLX, then run it on Linux with CUDA or on Apple Silicon with Metal.
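
To make that concrete, here is a minimal sketch of the idea, assuming the CUDA backend exposes the same mx.* Python API as the Metal one (which is the point of the port). The identical script would run on a Linux/NVIDIA box or an Apple Silicon Mac:

    import mlx.core as mx

    # MLX dispatches to whichever GPU backend it was built with
    # (Metal on Apple Silicon, CUDA on Linux); user code is unchanged.
    a = mx.random.normal((1024, 1024))
    b = mx.random.normal((1024, 1024))

    c = a @ b   # same graph on either backend
    mx.eval(c)  # MLX is lazy; eval() forces the computation

    print(mx.default_device())  # e.g. Device(gpu, 0)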

replies(4): >>44566248 #>>44566337 #>>44566338 #>>44566984 #
dkga ◴[] No.44566338[source]
This is the only strategy that humble me can see working for CUDA in MLX.
replies(1): >>44567408 #
whatever1 ◴[] No.44567408[source]
This is the right answer. Local models will be accelerated by Apple's private cloud.