
Devstral

(mistral.ai)
701 points by mfiguiere | 7 comments
1. johnQdeveloper ◴[] No.44057424[source]
*For people without a 24GB video card: I've got an 8GB one, and this model runs OK under ollama for simple tasks, but for anything time-sensitive that uses a large context window you'd probably want to pay for an API:*

total duration:       35.016288581s
load duration:        21.790458ms
prompt eval count:    1244 token(s)
prompt eval duration: 1.042544115s
prompt eval rate:     1193.23 tokens/s
eval count:           213 token(s)
eval duration:        33.94778571s
eval rate:            6.27 tokens/s

total duration:       4m44.951335984s
load duration:        20.528603ms
prompt eval count:    1502 token(s)
prompt eval duration: 773.712908ms
prompt eval rate:     1941.29 tokens/s
eval count:           1644 token(s)
eval duration:        4m44.137923862s
eval rate:            5.79 tokens/s
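(Sanity-checking those stats: the eval rate ollama reports is just the eval count divided by the eval duration. A quick check in Python, with the figures copied from the two runs above:)

```python
# Reproduce ollama's reported "eval rate" from its --verbose stats:
# eval rate (tokens/s) = eval count / eval duration.

def eval_rate(tokens: int, duration_s: float) -> float:
    """Tokens per second, rounded the way ollama prints it."""
    return round(tokens / duration_s, 2)

print(eval_rate(213, 33.94778571))             # → 6.27 (first run)
print(eval_rate(1644, 4 * 60 + 44.137923862))  # → 5.79 (second run, 4m44.1s)
```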

All I'm saying is that, compared with an API call that finishes in about 20% of the time, it feels a bit slow without the recommended graphics card.

In terms of benchmarks, it seems unusually well tuned for the model size, but I suspect that's just a case of gaming the measurement by testing against the benchmarks during development. That's not bad in and of itself; I suspect every LLM vendor marketing to IT folks does the same thing, so it's objective enough as a rough gauge of "is this usable?" without the heavy time expense of testing it yourself.

replies(1): >>44058748 #
2. throwaway314155 ◴[] No.44058748[source]
> For people without a 24GB RAM video card, I've got an 8GB RAM one running

What're you using for this? llama.cpp? I have a 12GB card (RTX 4070) I'd like to try it on.

replies(1): >>44058765 #
3. johnQdeveloper ◴[] No.44058765[source]
https://ollama.com/library/devstral

https://ollama.com/

I believe it's just an HTTP wrapper and terminal wrapper around llama.cpp, with some modifications (a fork).
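(For the curious: ollama serves that wrapper as a small JSON API on port 11434 by default. A minimal sketch of calling its generate endpoint — this assumes a local `ollama serve` with the model pulled; the request is only built here, not sent:)

```python
import json
from urllib import request

# ollama's default local endpoint for one-shot generation.
url = "http://localhost:11434/api/generate"
payload = {
    "model": "devstral",
    "prompt": "Write a hello-world program in C.",
    "stream": False,  # return one JSON object instead of a token stream
}

req = request.Request(
    url,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# Uncomment with a running `ollama serve`:
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```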

replies(1): >>44058830 #
4. throwaway314155 ◴[] No.44058830{3}[source]
Does ollama have support for CPU offloading?
replies(1): >>44059120 #
5. johnQdeveloper ◴[] No.44059120{4}[source]
> Does ollama have support for CPU offloading?

https://www.reddit.com/r/ollama/comments/1df757o/high_cost_o...

https://github.com/ollama/ollama/issues/8291

Yes.
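(The usual knob for this is ollama's `num_gpu` parameter: it sets how many layers are offloaded to the GPU, and whatever doesn't fit runs on the CPU. A sketch of setting it in a Modelfile — the layer count here is a made-up example, tune it to your VRAM:)

```
# Hypothetical Modelfile: offload only 24 layers to the GPU,
# keeping the remaining layers on the CPU.
FROM devstral
PARAMETER num_gpu 24
```

Then build and run it with `ollama create devstral-partial -f Modelfile`. You can also, I believe, set it per-session with `/set parameter num_gpu 24` in the interactive REPL.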

replies(1): >>44059677 #
6. taneq ◴[] No.44059677{5}[source]
A perfect blend of LMGTFY and helpfulness. :)
replies(1): >>44059984 #
7. johnQdeveloper ◴[] No.44059984{6}[source]
lol. I try not to be a total asshole, it sometimes even works! :)

Good luck to you mate with your life :)