
Hermes 4

(hermes4.nousresearch.com)
202 points | sibellavia | 5 comments
1. lern_too_spel No.45069425
The charts are utter nonsense. They compare accuracy against the average of some arbitrary set of competitors, chosen to include just enough obsolete models to "win." The reasonable thing would have been to compare against SoTA; since they didn't, it's fair to assume this model is meant to go directly onto the trash heap.
replies(3): >>45069769 >>45069848 >>45069996
2. whymauri No.45069769
The most direct, non-marketing, non-aesthetic summary is that this model trades off a few points on "fundamental benchmarks" (GPQA, MATH/AIME, MMLU) in exchange for being a "more steerable" (fewer refusals) scaffold for downstream tuning.

Within that framing, I think it's easier to see where and how the model fits into the larger ecosystem. But, of course, the best benchmark will always be just using the model.

3. fancyfredbot No.45069848
The charts are probably there mostly to make them feel good about themselves. I don't get the impression they care very much whether you use the model. Presumably they would like you to buy their token, but they don't really seem to be trying very hard to push that either.
4. jug No.45069996
The tech report compares against DeepSeek R1 671B, DeepSeek V3 671B, and Qwen3 235B, which have been regarded as SOTA-class among "open" models.

I think this one holds its own surprisingly well in benchmarks given that it uses the by-now, let's say, battle-tested Llama 3.1 base, which is a testament to its quality. (Llama 3.2 & 3.3 didn't introduce new bases IIRC, only new fine-tunes, which I think explains why Hermes 4 is still built on 3.1… and of course Llama 4 never happened, right guys.)

However, for real use I wouldn't bother with the 405B model. The age of the base really shows, especially in long contexts; it's like throwing a load of compute at something that's dated to begin with. You'd probably be better off with DeepSeek V3.1 or (my new favorite) GLM 4.5. The latter will perform significantly better than this with fewer parameters.

The 70B one seems more sensible to me, if you want (yet another) decent unaligned model to have fun with for whatever reason.

replies(1): >>45071103
5. BoorishBears No.45071103
You seem to be missing that the release announcement does have a very ridiculous graph, which their comment is right to call out:

- For refusals they broke out each model's percentage.

- For "% of Questions Correct by Category" they literally grouped an unnamed set of models, averaged out their scores, and combined them as "Other"...

That's hilariously sketchy; the toy example below shows why.
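
To make the objection concrete, here's a minimal sketch with invented numbers (nothing here comes from the actual report) of how averaging an arbitrary competitor set into one "Other" bar can bury a head-to-head loss against SOTA:

    # Hypothetical per-category scores; every number is made up.
    competitors = {
        "sota_model": 0.85,  # the strong competitor the chart should show
        "obsolete_a": 0.40,
        "obsolete_b": 0.45,
        "obsolete_c": 0.50,
    }
    our_model = 0.62

    # Collapsing all competitors into a single averaged "Other" bar:
    other = sum(competitors.values()) / len(competitors)  # 0.55

    print(f"ours {our_model:.2f} vs Other {other:.2f}")  # we "win": 0.62 > 0.55
    print(f"ours {our_model:.2f} vs SOTA {competitors['sota_model']:.2f}")  # we lose head-to-head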

It's also strange that the "Questions Correct" graph includes creativity and writing. Those categories don't have correct answers, only win rates (sketched below), and don't really fit on the same graph.
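
For contrast, a hypothetical sketch (not how they actually scored these categories) of what a creative-writing number usually is, i.e. a pairwise win rate rather than an accuracy:

    # Hypothetical pairwise judgments over a writing prompt set; invented data.
    # 1 = judge preferred our model's output, 0 = preferred the baseline's.
    judgments = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]

    win_rate = sum(judgments) / len(judgments)
    print(f"win rate vs baseline: {win_rate:.0%}")  # 70%, a relative score
    # There's no per-question "correct answer" here, so this number can't
    # honestly share an axis with "% of questions correct".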