Devstral (mistral.ai)
701 points | posted by mfiguiere

simonw
The first number I look at these days is the file size via Ollama, which for this model is 14GB https://ollama.com/library/devstral/tags

I find that on my M2 Mac that number is a rough approximation of how much memory the model needs (usually plus about 10%), which matters because I want to know how much RAM I will have left for running other applications.

Anything below 20GB tends not to interfere too much with the other stuff I'm running. This model looks promising!
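
As a rough sketch of the arithmetic (the ~10% overhead is just the approximation above, and the 32GB total is only an example machine size, not a measurement):

    # Back-of-envelope check: will a model fit alongside everything else?
    # Assumes the heuristic above: resident memory ~= file size + ~10%.
    def estimated_model_memory_gb(file_size_gb: float, overhead: float = 0.10) -> float:
        return file_size_gb * (1 + overhead)

    def ram_left_gb(total_ram_gb: float, file_size_gb: float) -> float:
        return total_ram_gb - estimated_model_memory_gb(file_size_gb)

    # Devstral's Ollama download is ~14GB; on a 32GB machine:
    print(f"{ram_left_gb(32, 14):.1f} GB left")  # ~16.6 GB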

nico
Any agentic dev software you could recommend that runs well with local models?

I’ve been using Cursor and I’m kind of disappointed. I get better results just going back and forth between the editor and ChatGPT.

I tried localforge and aider, but they are kinda slow with local models.

zackify
I used Devstral today with Cline and OpenHands. Worked great in both.

About 1 minute of initial prompt processing time on an M4 Max.

Using LM Studio because the Ollama API breaks if you set the context to 128k.
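
For anyone wiring a tool up to it themselves: LM Studio serves an OpenAI-compatible API on localhost (port 1234 by default), so a minimal sketch looks like this (the model ID is a placeholder for whatever you actually have loaded):

    # Point the standard OpenAI client at LM Studio's local server.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    response = client.chat.completions.create(
        model="devstral-small",  # placeholder: use the ID LM Studio shows
        messages=[{"role": "user", "content": "Write a function that parses a CSV line."}],
    )
    print(response.choices[0].message.content)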

nico
Have you tried using MLX or Simon Willison’s llm?

https://llm.datasette.io/en/stable/

https://simonwillison.net/tags/llm/
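
For reference, using it looks roughly like this with llm's Python API (a sketch only; it assumes the llm-mlx plugin is installed via "llm install llm-mlx", and the model ID is a guess, so substitute whatever MLX build you pull):

    # llm's Python API: get a model, prompt it, read the response.
    import llm

    model = llm.get_model("mlx-community/Devstral-Small-2505-4bit")  # placeholder ID
    response = model.prompt("Write a Python function that reverses a string.")
    print(response.text())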

zackify
On LM Studio I was using MLX.
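
If you want to skip LM Studio entirely, mlx-lm can run an MLX build directly. Rough sketch (the repo name is my guess at an mlx-community quantization, so treat it as a placeholder):

    # Load an MLX-quantized model and generate directly with mlx-lm.
    from mlx_lm import load, generate

    model, tokenizer = load("mlx-community/Devstral-Small-2505-4bit")  # placeholder
    print(generate(model, tokenizer, prompt="def fibonacci(n):", max_tokens=100))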