182 points BUFU | 12 comments
1. Patrick_Devine No.42070613
This was a pretty heavy lift for us to get out, which is why it took a while. In addition to writing new image processing routines, a vision encoder, and doing cross attention, we also ended up re-architecting the way the models get run by the scheduler. We'll have a technical blog post soon about all the stuff that ended up changing.
replies(4): >>42070644 >>42071917 >>42072723 >>42076774
2. exe34 No.42070644
Did you feed the changes back into llama.cpp?

Also, can it do grounding like CogVLM?

Either way, great job!

replies(1): >>42070949
3. Patrick_Devine No.42070949
It's difficult because we actually ditched a lot of the C++ code with this change and rewrote it in Go. Specifically, server.cpp has been excised (it was deprecated by llama.cpp anyway), and the image processing routines are all written in Go as well. We also bypassed clip.cpp and wrote our own routines for the image encoder/cross attention (using GGML).

The hope is to be able to get more multimodal models out soon. I'd like to see if we can get Pixtral and Qwen2.5-VL in relatively soon.

replies(2): >>42072553 >>42074277
4. zozbot234 No.42071917
How long until Vulkan Compute support is merged into Ollama? There is an active pull request at https://github.com/ollama/ollama/pull/5059, but it seems to be stalled with no reviews.
5. qrios No.42072553
> Specifically server.cpp has been excised (which was deprecated by llama.cpp anyway)

Is there any more specific info available about who (llama.cpp or Ollama) removed what, where? As far as I can see, the server is still part of llama.cpp.

And more generally: Is this the moment when Ollama and Llama part ways?

6. csomar No.42072723
Any info on when we'll get the 11B and 90B models?
replies(1): >>42076770
7. exe34 No.42074277
That's cool, thank you! No grounding then? I don't get the impression it's actually part of Llama 3.2 Vision, but I thought it was worth checking with somebody who might have the experience.
replies(1): >>42079858
8. jjice No.42076770
Not sure if I'm misunderstanding, but they're live: https://ollama.com/library/llama3.2-vision

Ran the 11B yesterday and it worked great.

replies(1): >>42083795
9. jjice No.42076774
Y'all did a fantastic job! This works great, and having it all right there inside Ollama is a huge step for local model execution.
10. Patrick_Devine No.42079858
I haven't looked at CogVLM, but if you mean doing bounding boxes with classification, I'd love to support models like that (e.g. Detectron2) in the future.
replies(1): >>42080375
11. exe34 No.42080375
I'm not sure what you mean by classification, but something like it, yes:

"what are the coordinates of the bounding box for the rubber duck in the image [img]" >>> "[10,50,200,300]"

12. csomar No.42083795
These are vision-optimized, though? Or does that not make them perform worse on coding tasks?