343 points by LorenDB | 1 comment | source
tommica ◴[] No.44002018[source]
Side tangent: why is Ollama frowned upon by some people? I've never really gotten any explanation beyond "you should run llama.cpp yourself" (a minimal sketch of which follows below).
replies(9): >>44002029 #>>44002150 #>>44002166 #>>44002486 #>>44002513 #>>44002621 #>>44004218 #>>44005337 #>>44006200 #
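
For context, "run llama.cpp yourself" typically means building the project and starting its bundled llama-server. A minimal sketch, where the model path and port are placeholders:

    # build llama.cpp, then serve a local GGUF model
    cmake -B build && cmake --build build --config Release
    ./build/bin/llama-server -m ./models/my-model.gguf --port 8080

llama-server then exposes an OpenAI-compatible API at http://localhost:8080/v1/chat/completions, which most chat frontends can point at directly.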
buyucu ◴[] No.44002621[source]
I abandoned Ollama because it does not support Vulkan: https://news.ycombinator.com/item?id=42886680

You have to support Vulkan if you care about consumer hardware. Ollama devs clearly don't.

replies(1): >>44003156 #
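
For what it's worth, llama.cpp's Vulkan backend is a build-time option. A sketch, assuming a recent checkout (the CMake flag is GGML_VULKAN in current trees; older versions used LLAMA_VULKAN):

    # build llama.cpp with the Vulkan backend enabled
    cmake -B build -DGGML_VULKAN=ON
    cmake --build build --config Release
    # llama-server logs the detected Vulkan device at startup
    ./build/bin/llama-server -m ./models/my-model.gguf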
ramon156[dead post] ◴[] No.44003156[source]
[flagged]
buyucu ◴[] No.44003743{3}[source]
Why would I use software that doesn't have the features I want when a far better alternative like llama.cpp exists? Ollama does not add any value.
replies(1): >>44003903 #
magicalhippo ◴[] No.44003903{4}[source]
More often than not, I add multiple models to my WebUI chats to compare and contrast their outputs.

Ollama makes this trivial compared to llama.cpp, so for me it adds a lot of value (a sketch of the workflow follows below).

replies(1): >>44005537 #
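
This workflow is easy to reproduce against Ollama's local API. A sketch, assuming the default endpoint on localhost:11434 and two already-pulled models (the model names here are placeholders; substitute whatever you have installed):

    # ask two models the same question and compare the answers
    ollama pull llama3.2
    ollama pull mistral
    for m in llama3.2 mistral; do
      curl -s http://localhost:11434/api/generate \
        -d "{\"model\": \"$m\", \"prompt\": \"Why is the sky blue?\", \"stream\": false}"
      echo
    done

Ollama loads each requested model on demand, which is the convenience being described: no manual juggling of one server process per model.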
buyucu ◴[] No.44005537{5}[source]
I think llama-swap does it better than Ollama.
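
For readers who haven't seen it: llama-swap is a small proxy that launches llama-server processes on demand and swaps them based on the "model" field of incoming OpenAI-style requests. A rough sketch of the idea; the config keys, ports, and CLI flags below are from memory and should be checked against the project's README:

    # write a two-model config (one llama-server command per model)
    cat > config.yaml <<'EOF'
    models:
      "llama":
        cmd: llama-server --port 9001 -m ./models/llama.gguf
        proxy: http://127.0.0.1:9001
      "mistral":
        cmd: llama-server --port 9002 -m ./models/mistral.gguf
        proxy: http://127.0.0.1:9002
    EOF
    # run the proxy; it starts/stops the matching llama-server
    # whenever the requested "model" changes
    llama-swap --config config.yaml --listen :8080 &
    curl -s http://localhost:8080/v1/chat/completions \
      -H 'Content-Type: application/json' \
      -d '{"model": "llama", "messages": [{"role": "user", "content": "hi"}]}'

The upside over Ollama is that you keep llama.cpp's full feature set (Vulkan builds included) while still getting automatic model switching behind a single endpoint.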