
255 points | tbruckner | 1 comment
rvz No.37420331
Totally makes sense to run C++- or Rust-based AI models for inference, instead of over-bloated networks run in Python with sub-optimal inference and fine-tuning costs.

Minimal-overhead or zero-cost abstractions around deep learning libraries implemented in those languages give some hope that people like ggerganov are not afraid of the 'don't roll your own deep learning library' dogma, and now we can see the results: DL on the edge and local AI are the future of efficiency in deep learning.

We'll see, but Python just can't compete on speed at all; hence Modular's Mojo compiler is another project that solves the problem properly, with almost 1:1 familiarity with Python.

replies(5): >>37420484 #>>37420605 #>>37420734 #>>37421354 #>>37422072 #
1. brucethemoose2 No.37420605
In PyTorch, the actual inference is not run in Python, and it's usually not bottlenecked by it.

The problem is CUDA, not Python.

LLMs are uniquely suited to local inference in projects like GGML because they are so RAM-bandwidth heavy (and hence relatively compute-light), and relatively simple. Your kernel doesn't need to be hyper-optimized by 35 Nvidia engineers across 3 stacks before it's fast enough to start saturating the memory bus generating tokens.
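
To make the bandwidth argument concrete, here's a rough back-of-envelope sketch (the model size and bandwidth numbers below are illustrative assumptions, not measurements): each generated token has to stream roughly the whole set of weights through memory once, so bandwidth divided by model size gives a ceiling on tokens per second, regardless of how fast the compute is.

    #include <cstdio>

    int main() {
        // Illustrative assumptions, not measurements:
        const double model_bytes    = 7e9 * 0.5;  // ~7B params at ~4 bits/weight
        const double mem_bw_bytes_s = 50e9;       // ~50 GB/s of CPU memory bandwidth
        // Each token streams the weights once, so this is an upper bound:
        const double max_tok_s = mem_bw_bytes_s / model_bytes;
        std::printf("bandwidth-bound ceiling: ~%.1f tokens/s\n", max_tok_s);
        return 0;
    }

With those made-up numbers you land around ~14 tokens/s before compute even enters the picture, which is also why quantizing the weights (shrinking model_bytes) translates so directly into generation speed.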

And yet it's still an issue... For instance, llama.cpp is having trouble getting prompt ingestion performance in a native implementation comparable to cuBLAS, even though they theoretically have a performance advantage by operating on the quantization directly.
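
For context on the "operating on the quantization directly" point, here's a minimal sketch of a block-quantized dot product. The BlockQ8 layout is hypothetical (loosely Q8_0-like), not llama.cpp's actual kernel: the weights stay as int8 blocks plus a per-block scale, so a weight row costs roughly a quarter of the bytes of an f32 row, which is the theoretical advantage over dequantizing everything to floats and handing it to cuBLAS.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Hypothetical block format, loosely Q8_0-like: 32 int8 weights plus one
    // f32 scale per block (36 bytes instead of 128 bytes for 32 f32 weights).
    struct BlockQ8 {
        float  scale;
        int8_t q[32];
    };

    // dot(quantized weight row, f32 activations); assumes x.size() == w.size() * 32
    float dot_q8_f32(const std::vector<BlockQ8>& w, const std::vector<float>& x) {
        float acc = 0.0f;
        for (std::size_t b = 0; b < w.size(); ++b) {
            float partial = 0.0f;
            for (int i = 0; i < 32; ++i)
                partial += static_cast<float>(w[b].q[i]) * x[b * 32 + i];
            acc += w[b].scale * partial;  // undo the quantization once per block
        }
        return acc;
    }

    int main() {
        std::vector<BlockQ8> w(1);
        w[0].scale = 0.1f;
        for (int i = 0; i < 32; ++i) w[0].q[i] = 1;
        std::vector<float> x(32, 2.0f);
        std::printf("dot = %f\n", dot_q8_f32(w, x));  // 0.1 * 32 * 1 * 2 = 6.4
        return 0;
    }

For token generation this bandwidth saving dominates; for prompt ingestion (big batched matmuls) the work becomes compute-bound, which is where hand-tuned BLAS libraries are hard to beat with a native kernel like this.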