
344 points by LorenDB | 6 comments
buyucu [dead post] No.44002611
[flagged]
1. hexmiles No.44003680
If I understood it correctly: this time no, it is actually a new engine built by the Ollama team, independent of llama.cpp.
replies(2): >>44003746 >>44003749
2. Havoc No.44003746
llama.cpp added support for vision 6 days ago.

See Simon Willison's post here:

https://simonwillison.net/2025/May/10/llama-cpp-vision/

>If I understood it correctly

You understood it exactly like they wanted you to...

3. buyucu No.44003749
I doubt it. llama.cpp just added support for the same models a few weeks ago. Folks at Ollama just did a git pull.
replies(2): >>44004061 >>44004120
4. magicalhippo No.44004061
It's open source; you could have checked. It does indeed seem like the new engine cuts out llama.cpp and uses the GGML library directly.

https://github.com/ollama/ollama/pull/7913
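
To make "uses GGML directly" concrete, here is a minimal C sketch of driving the GGML library with no llama.cpp layer on top: build a tiny compute graph and evaluate it. This is a toy illustration, not Ollama's actual code; the function names come from the public ggml repo and may differ between versions.

    /* toy GGML program: compute c = a + b on the CPU
       (assumes the ggml headers/library are installed; API may vary by version) */
    #include <stdio.h>
    #include "ggml.h"

    int main(void) {
        /* GGML allocates tensors out of a caller-provided arena */
        struct ggml_init_params params = {
            .mem_size   = 16 * 1024 * 1024,  /* 16 MiB, plenty for this toy */
            .mem_buffer = NULL,
            .no_alloc   = false,
        };
        struct ggml_context *ctx = ggml_init(params);

        /* declare the graph: c = a + b over 4 floats */
        struct ggml_tensor *a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
        struct ggml_tensor *b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
        struct ggml_tensor *c = ggml_add(ctx, a, b);

        /* fill the inputs */
        for (int i = 0; i < 4; i++) {
            ggml_set_f32_1d(a, i, (float) i);
            ggml_set_f32_1d(b, i, 10.0f);
        }

        /* build and run the forward graph */
        struct ggml_cgraph *gf = ggml_new_graph(ctx);
        ggml_build_forward_expand(gf, c);
        ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/1);

        for (int i = 0; i < 4; i++)
            printf("c[%d] = %.1f\n", i, ggml_get_f32_1d(c, i));  /* 10.0, 11.0, ... */

        ggml_free(ctx);
        return 0;
    }

llama.cpp is one consumer of this library; an engine like Ollama's new runner can link against it in the same way without going through llama.cpp at all.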

replies(1): >>44005517
5. [deleted] No.44004120
6. buyucu No.44005517
Seriously? Who do you think develops ggml?

Hint: it's llama.cpp.