
216 points by veggieroll | 3 comments
1. tarruda No.41863180
They didn't add a comparison to Qwen 2.5 3B, which seems to surpass Ministral 3B on MMLU, HumanEval, and GSM8K: https://qwen2.org/qwen2-5/#qwen25-05b15b3b-performance

These benchmarks don't really matter that much, but it is funny how this blog post conveniently omits a comparison with a model that already exists and performs better.
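For anyone who'd rather sanity-check the claim than trust either vendor's blog post, here is a minimal sketch of a small GSM8K spot check with transformers and datasets. The Qwen id is the public Hugging Face checkpoint; the second entry is a placeholder (Ministral 3B weights were API-only at launch), so swap in whatever checkpoint you actually have access to. Fifty greedy-decoded samples won't reproduce published numbers, but it's usually enough to tell whether a claimed gap is real.

```python
# Minimal GSM8K spot check for two small instruct models.
# Model ids are assumptions: Qwen/Qwen2.5-3B-Instruct is on the HF Hub;
# OTHER_MODEL is a hypothetical placeholder for the second checkpoint.
import re
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

MODELS = ["Qwen/Qwen2.5-3B-Instruct", "OTHER_MODEL"]  # second id is a placeholder
N_SAMPLES = 50  # small sample: a sanity check, not a benchmark run


def extract_number(text: str):
    """Grab the last number in the text; GSM8K gold answers are plain numbers."""
    nums = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return nums[-1] if nums else None


def run(model_id: str) -> float:
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    data = load_dataset("gsm8k", "main", split=f"test[:{N_SAMPLES}]")
    correct = 0
    for ex in data:
        messages = [{"role": "user",
                     "content": ex["question"] + "\nGive the final numeric answer."}]
        inputs = tok.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        out = model.generate(inputs, max_new_tokens=256, do_sample=False)
        answer = tok.decode(out[0][inputs.shape[1]:], skip_special_tokens=True)
        gold = ex["answer"].split("####")[-1].strip()  # gold answer follows '####'
        if extract_number(answer) == extract_number(gold):
            correct += 1
    return correct / len(data)


if __name__ == "__main__":
    for m in MODELS:
        print(m, f"{run(m):.1%}")
```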

2. butterfly42069 No.41863218
At this point the benchmarks barely matter at all. It's entirely possible to train for a high benchmark score and reduce the overall quality of the model in the process.

IMO, use the model that makes the most sense when you ask it stuff, and personally I'd go for the one with the least censorship (which, IMO, isn't anything from Alibaba's Qwen line).
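A quick way to do the "ask it stuff" comparison is to run the same prompts through both models and eyeball the answers side by side. Here is a minimal sketch using the transformers text-generation pipeline; the second model id and the prompts are placeholders for whatever you actually care about.

```python
# Side-by-side "vibe check": same prompts, two models, read the outputs yourself.
# Model ids and prompts are placeholders, not an endorsement of either model.
from transformers import pipeline

MODELS = ["Qwen/Qwen2.5-3B-Instruct", "OTHER_3B_MODEL"]  # second id is hypothetical
PROMPTS = [
    "Explain the difference between a mutex and a semaphore in two sentences.",
    "Summarize the plot of Hamlet in one paragraph.",
]

for model_id in MODELS:
    gen = pipeline("text-generation", model=model_id,
                   device_map="auto", torch_dtype="auto")
    for prompt in PROMPTS:
        # Passing a chat (list of role/content dicts) makes the pipeline apply
        # the model's chat template; the reply is the last message returned.
        out = gen([{"role": "user", "content": prompt}],
                  max_new_tokens=200, do_sample=False)
        reply = out[0]["generated_text"][-1]["content"]
        print(f"--- {model_id} ---\n{prompt}\n{reply}\n")
```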

3. DreamGen No.41863231
Also, the 3B model is API-only (so the only things that matter are price, quality, and speed); it should be compared to something like Gemini Flash 1.5 8B, which is cheaper than this 3B API, scores higher on benchmarks, supports a much longer context, etc.
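Since the model is API-only, the practical comparison is a timed request against each hosted endpoint. Below is a rough sketch using the openai client pointed at OpenAI-compatible base URLs; the URLs, model names, and per-token prices are assumptions that should be checked against each provider's current docs and pricing page before relying on the numbers.

```python
# Rough latency + cost comparison of two hosted small models via
# OpenAI-compatible endpoints. Base URLs, model names, and prices below are
# assumptions; verify them against each provider's docs before use.
import os
import time
from openai import OpenAI

ENDPOINTS = {
    "ministral-3b-latest": dict(                     # assumed Mistral API model name
        base_url="https://api.mistral.ai/v1",        # assumed OpenAI-compatible endpoint
        api_key=os.environ["MISTRAL_API_KEY"],
        usd_per_1m_output_tokens=0.04,               # placeholder price
    ),
    "gemini-1.5-flash-8b": dict(
        base_url="https://generativelanguage.googleapis.com/v1beta/openai/",  # assumed
        api_key=os.environ["GEMINI_API_KEY"],
        usd_per_1m_output_tokens=0.15,               # placeholder price
    ),
}

PROMPT = "Write a one-paragraph summary of the CAP theorem."

for model, cfg in ENDPOINTS.items():
    client = OpenAI(base_url=cfg["base_url"], api_key=cfg["api_key"])
    start = time.perf_counter()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        max_tokens=300,
    )
    elapsed = time.perf_counter() - start
    out_tokens = resp.usage.completion_tokens
    cost = out_tokens / 1e6 * cfg["usd_per_1m_output_tokens"]
    print(f"{model}: {elapsed:.2f}s, {out_tokens} output tokens, ~${cost:.6f}")
```

This only measures single-request latency and output-token cost; throughput, rate limits, and context-length behavior would need a separate test.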