On a similar thread, how does it compare to Hippoml?
replies(1):
It's comparable in speed to Apache TVM's Vulkan backend on CUDA; see https://github.com/mlc-ai/mlc-llm
But honestly, the biggest advantage of llama.cpp for me is being able to split a model so performantly. My puny 16GB laptop can just barely, but very practically, run LLaMA 30B at almost 3 tokens/s, and do it right now. That is crazy!
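For reference, the kind of invocation I mean looks roughly like this; the model path, quantization level, and the layer/thread/token counts are placeholders to show the knobs involved, not my exact setup:

```sh
# Run a 4-bit quantized 30B model with llama.cpp, offloading some transformer
# layers to the GPU (--n-gpu-layers) while the rest stay in system RAM.
# Paths and all numbers below are illustrative, not a benchmarked config.
./main -m ./models/30B/ggml-model-q4_0.bin \
       --n-gpu-layers 20 \
       -t 8 \
       -n 256 \
       -p "Building a website can be done in 10 simple steps:"
```

Lowering or raising `--n-gpu-layers` is the main lever for trading VRAM against speed when the whole model doesn't fit on one device.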
Please tell me your config! I have an i9-10900 with 32GB of RAM that only gets 0.7 tokens/s on a 30B model.