Side tangent: why is Ollama frowned upon by some people? I've never really gotten any explanation other than "you should run llama.cpp yourself."
replies(9):
It'd be like if HandBrake tried to pretend that they implemented all the video processing work themselves, when it depends on FFmpeg's libraries for all of that.
Was.
This submission is literally about them moving away from being just a wrapper around llama.cpp :)
ggml != llama.cpp, but llama.cpp and Ollama are both using ggml as a library.
“Some of the development is currently happening in the llama.cpp and whisper.cpp repos” (https://github.com/ggml-org/ggml)