
137 points by BUFU | 6 comments
1. Patrick_Devine No.42070613
This was a pretty heavy lift for us to get out, which is why it took a while. In addition to writing new image processing routines, a vision encoder, and the cross-attention layers, we also ended up re-architecting the way the scheduler runs the models. We'll have a technical blog post soon about everything that ended up changing.
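(To be clear about what "cross attention" means here: the language model's text tokens attend over the output tokens of the vision encoder. A toy single-head version in plain Go looks roughly like the sketch below; this is illustrative only, not our actual implementation, and the shapes and names are made up.)

    package main

    import (
        "fmt"
        "math"
    )

    // crossAttend computes softmax(q*k^T / sqrt(dim)) * v for a single head:
    // q holds the text-token queries, k and v come from the vision encoder.
    func crossAttend(q, k, v [][]float32) [][]float32 {
        dim := len(k[0])
        scale := 1.0 / math.Sqrt(float64(dim))
        out := make([][]float32, len(q))
        for i, qi := range q {
            // score text token i against every vision token
            scores := make([]float64, len(k))
            max := math.Inf(-1)
            for j, kj := range k {
                var dot float64
                for d := range qi {
                    dot += float64(qi[d]) * float64(kj[d])
                }
                scores[j] = dot * scale
                if scores[j] > max {
                    max = scores[j]
                }
            }
            // numerically stable softmax over the scores
            var sum float64
            for j := range scores {
                scores[j] = math.Exp(scores[j] - max)
                sum += scores[j]
            }
            // weighted sum of the vision-token values
            out[i] = make([]float32, len(v[0]))
            for j, vj := range v {
                w := float32(scores[j] / sum)
                for d := range vj {
                    out[i][d] += w * vj[d]
                }
            }
        }
        return out
    }

    func main() {
        text := [][]float32{{1, 0}, {0, 1}}           // two text-token queries
        vision := [][]float32{{1, 0}, {0, 1}, {1, 1}} // three vision tokens (keys = values here)
        fmt.Println(crossAttend(text, vision, vision))
    }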
replies(3): >>42070644 #>>42071917 #>>42072723 #
2. exe34 No.42070644
Did you feed the changes back into llama.cpp?

Also, can it do grounding like CogVLM?

Either way, great job!

replies(1): >>42070949 #
3. Patrick_Devine No.42070949
It's difficult because we actually ditched a lot of the C++ code with this change and rewrote it in Go. Specifically, server.cpp has been excised (it was deprecated by llama.cpp anyway), and the image processing routines are all written in Go as well. We also bypassed clip.cpp and wrote our own routines for the image encoder and cross attention (using GGML).
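To give a rough idea of what the Go side of the image pipeline involves, here's a stripped-down sketch of decode, resize, and normalize into a float32 tensor. This is illustrative only, not the actual code: the 560x560 input size and the CLIP-style normalization constants are placeholder assumptions, and a real pipeline would use a proper bilinear resize instead of nearest-neighbour.

    package main

    import (
        "fmt"
        "image"
        _ "image/jpeg" // register JPEG decoder
        _ "image/png"  // register PNG decoder
        "os"
    )

    const side = 560 // assumed encoder input size; the real value is model-specific

    // preprocess decodes an image, resamples it to side x side, and returns a
    // normalized float32 tensor in CHW layout, ready to hand to the encoder.
    func preprocess(path string) ([]float32, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        img, _, err := image.Decode(f)
        if err != nil {
            return nil, err
        }

        // CLIP-style normalization constants (placeholders)
        mean := [3]float32{0.48145466, 0.4578275, 0.40821073}
        std := [3]float32{0.26862954, 0.26130258, 0.27577711}

        b := img.Bounds()
        out := make([]float32, 3*side*side)
        for y := 0; y < side; y++ {
            for x := 0; x < side; x++ {
                // nearest-neighbour resample, to keep the sketch stdlib-only
                sx := b.Min.X + x*b.Dx()/side
                sy := b.Min.Y + y*b.Dy()/side
                r, g, bl, _ := img.At(sx, sy).RGBA() // 16-bit channel values
                px := [3]float32{float32(r) / 65535, float32(g) / 65535, float32(bl) / 65535}
                for c := 0; c < 3; c++ {
                    out[c*side*side+y*side+x] = (px[c] - mean[c]) / std[c]
                }
            }
        }
        return out, nil
    }

    func main() {
        t, err := preprocess("cat.jpg") // hypothetical input file
        if err != nil {
            panic(err)
        }
        fmt.Println("tensor length:", len(t))
    }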

The hope is to get more multimodal models out soon; I'd like to see if we can land Pixtral and Qwen2.5-VL relatively quickly.

replies(1): >>42072553 #
4. zozbot234 No.42071917
How long until Vulkan Compute support is merged into Ollama? There is an active pull request at https://github.com/ollama/ollama/pull/5059, but it seems to be stalled with no reviews.
5. qrios No.42072553
> Specifically, server.cpp has been excised (it was deprecated by llama.cpp anyway)

Is there any more specific info available about who (llama.cpp or Ollama) removed what, where? As far as I can see, the server is still part of llama.cpp.

And more generally: is this the moment when Ollama and llama.cpp part ways?

6. csomar No.42072723
Any info on when we will get the 11B and 90B models?