343 points by LorenDB | 1 comment
tommica:
Side tangent: why is Ollama frowned upon by some people? I've never really gotten any explanation beyond "you should run llama.cpp yourself".
lhl:
There's some discussion here: https://www.reddit.com/r/LocalLLaMA/comments/1jzocoo/finally...

Ollama appears to not properly credit llama.cpp: https://github.com/ollama/ollama/issues/3185 - this is a long-standing issue that hasn't been addressed.

This seems to have leaked into other projects where even when llama.cpp is being used directly, it's being credited to Ollama: https://github.com/ggml-org/llama.cpp/pull/12896

Ollama doesn't contribute upstream (that's fine, they're not obligated to), but it's a bit weird that one of the devs claimed to have and, well, hadn't really: https://www.reddit.com/r/LocalLLaMA/comments/1k4m3az/here_is... - that said, they seem to maintain their own fork, so anyone could cherry-pick stuff from it if they wanted to: https://github.com/ollama/ollama/commits/main/llama/llama.cp...
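
(For anyone curious, a rough sketch of what that cherry-picking would look like. The remote name and commit hash are placeholders, and since Ollama vendors llama.cpp under llama/llama.cpp the paths won't map one-to-one, so expect to resolve conflicts by hand:)

    # from a llama.cpp checkout, add Ollama's repo as a remote
    git remote add ollama https://github.com/ollama/ollama.git
    git fetch ollama

    # browse their vendored llama.cpp history for commits of interest
    git log ollama/main -- llama/llama.cpp

    # apply a single commit onto your branch (hash is illustrative)
    git cherry-pick <commit-hash>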

tommica:
Thanks for the good explanation!