
Devstral

(mistral.ai)
701 points by mfiguiere | 4 comments
jwr ◴[] No.44059039[source]
My experience with LLMs suggests that benchmark numbers are increasingly detached from reality, at least from mine.

I tested this model with several of my Clojure problems and it is significantly worse than qwen3:30b-a3b-q4_K_M.

I don't know what to make of this. I don't trust benchmarks much anymore.

replies(1): >>44059264 #
1. NitpickLawyer ◴[] No.44059264[source]
How did you test this? Note that this is not a regular coding model (i.e., "write a function that does x"). It is a fine-tuned model specifically post-trained on a cradle (OpenHands, formerly OpenDevin). Their main focus was enabling "agentic" flows with tool use, where you give the model a broad task (say, a git ticket) and it starts with search_repo() or read_docs(), followed by read_file() in your repo, then edit_file(), then run_tests(), and so on. It is intended to solve those problems first. They suggest using it with OpenHands for best results.

Early reports from Reddit say that it also works in Cline, while other, stronger coding models had issues there (they were fine-tuned more toward step-by-step chat with a user). I think this distinction is important to keep in mind when testing.
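The agentic flow described above can be sketched as a minimal harness loop. This is a hypothetical Python sketch, not OpenHands' actual protocol; the message format, action shapes, and tool names are assumptions for illustration:

```python
# Hypothetical agentic tool-use loop. The model is a callable that,
# given the conversation history, returns either a tool call
# (e.g. {"tool": "read_file", "args": {...}}) or a final answer
# ({"tool": None, "content": ...}). The harness executes tools and
# feeds results back until the model stops or the step budget runs out.
def agent_loop(model, tools, task, max_steps=20):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = model(history)
        if action.get("tool") is None:
            return action["content"]          # model produced a final answer
        result = tools[action["tool"]](**action["args"])
        history.append({"role": "tool", "name": action["tool"],
                        "content": result})   # tool output goes back to the model
    raise RuntimeError("step budget exhausted")
```

The point of post-training on a cradle is that the model learns to drive a loop like this (choosing tools, reading results, iterating) rather than to answer a single prompt.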

replies(3): >>44060980 #>>44064209 #>>44070237 #
2. desdenova ◴[] No.44060980[source]
I did a very simple tool-calling test and it was simply unable to call the tool and use the result.

Maybe it's specialized to use just a few very specific tools? Is there some documentation on how to actually set it up without requiring some weird external platform?

3. tasuki ◴[] No.44064209[source]
> "write a function that does x"

Which model is optimized to do that? This is what I want out of LLMs! And also discussing high-level architecture (without any code) and library discovery, but I guess the general chat models are good enough for that...

4. jwr ◴[] No.44070237[source]
I didn't actually even test tool calling. I have two test cases that I use for all models: one is a floating-point equality function, which is quite difficult to get right, and another is a core.async pack-into-batches! function which has the following docstring:

  "Take items from `input-ch` and group them into `batch-size` vectors. Put these onto `output-ch`. Once items
  start arriving, if `batch-size` items do not arrive within `inactivity-timeout`, put the current incomplete
  batch onto `output-ch`. If an anomaly is received, passes it on to `output-ch` and closes all channels. If
  `input-ch` is closed, closes `output-ch`.

  If `flush-predicate-fn` is provided, it will get called with two parameters: the currently accumulated
  batch (guaranteed to have at least one item) and the next item. If the function returns a truthy value, the
  batch will get flushed immediately.

  If `convert-batch-fn` is provided, it will get called with the currently accumulated batch (guaranteed to
  have at least one item) and its return value will be put onto `output-ch`. Anomalies bypass
  `convert-batch-fn` and get put directly onto `output-ch` (which gets closed immediately afterwards)."
In other words, not obvious.

I ask the model to review the code and tell me if there are improvements that can be made. Big (online) models can do a pretty good job with the floating point equality function, and suggest something at least in the ballpark for the async code. Small models rarely get everything right, but some of their observations are good.
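For context on the other test case: floating-point equality is usually done with a combined relative/absolute tolerance comparison. A minimal sketch in the spirit of Python's math.isclose (PEP 485), not jwr's actual Clojure function:

```python
import math

def approx_equal(a, b, rel_tol=1e-9, abs_tol=0.0):
    """Combined relative/absolute tolerance float comparison.
    abs_tol covers comparisons near zero, where a purely relative
    tolerance would reject everything."""
    if a == b:                        # fast path; also handles equal infinities
        return True
    if math.isinf(a) or math.isinf(b):
        return False                  # one infinity: never "close" to a finite value
    diff = abs(a - b)
    return diff <= max(rel_tol * max(abs(a), abs(b)), abs_tol)
```

The subtleties (infinities, NaN falling out of the final comparison as False, needing abs_tol near zero) are what make "get it right" hard, and what a good review should catch.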