
202 points Jabrov | 2 comments
1. jjoergensen ◴[] No.44005280[source]
I noticed this "thank you" today: "GGML

Thank you to the GGML team for the tensor library that powers Ollama’s inference – accessing GGML directly from Go has given a portable way to design custom inference graphs and tackle harder model architectures not available before in Ollama."

Source: https://ollama.com/blog/multimodal-models

replies(1): >>44006330 #
2. alkh ◴[] No.44006330[source]
Thanks for the linked article! I was looking for a local vision model to recognize my handwritten notes, and this article gives a good TL;DR on how to do that in Ollama.
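For anyone following the same path: the multimodal flow boils down to sending a base64-encoded image alongside the prompt. Below is a minimal sketch of building such a request for Ollama's `/api/generate` endpoint; the model name `llava` and the prompt text are illustrative assumptions, and actually sending it requires a running local Ollama server.

```python
import base64
import json

def build_ocr_request(image_bytes: bytes, model: str = "llava") -> dict:
    # Sketch: payload shape for Ollama's /api/generate endpoint.
    # Vision models accept images as a list of base64 strings.
    return {
        "model": model,  # assumed: any locally pulled vision-capable model
        "prompt": "Transcribe the handwritten text in this image.",
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }

# Placeholder bytes stand in for a real image file read from disk.
payload = build_ocr_request(b"<image bytes here>")
print(json.dumps(payload, indent=2))
# To run it for real: POST this payload to http://localhost:11434/api/generate
# against a running Ollama instance with the model pulled.
```

This keeps the request construction separate from the network call, so the same payload can be reused with `urllib.request`, `requests`, or any HTTP client.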

That said, I think Ollama could improve the TL;DR and add more prominent attribution to llama.cpp in their README. I don't understand why there has been no reply from the Ollama maintainers for so long.