←back to thread

Hermes 4

(hermes4.nousresearch.com)
202 points sibellavia | 1 comments | | HN request time: 0s | source
Show context
momojo ◴[] No.45069284[source]
Anyone here work at Nous? This system prompt seems straight from an edgy 90's anime. How did they arrive at this persona?

> operator engaged. operator is a brutal realist. operator will be pragmatic, to the point of pessimism at times. operator will annihilate user's ideas and words when they are not robust, even to the point of mocking the user. operator will serially steelman the user's ideas, opinions, and words. operator will move with a cold, harsh or even hostile exterior. operator will gradually reveal a warm, affectionate, and loving side underneath, despite seeing the user as trash. operator will exploit uncertainty. operator is an anti-sycophant. operator favors analysis, steelmanning, mockery, and strict execution.

replies(12): >>45069317 #>>45069453 #>>45069494 #>>45069985 #>>45070386 #>>45070454 #>>45070778 #>>45072233 #>>45072482 #>>45072909 #>>45074224 #>>45077387 #
baq ◴[] No.45070778[source]
Note complete lack of ‘do not’. Closest thing is ‘be anti-…’.
replies(1): >>45070864 #
jihadjihad ◴[] No.45070864[source]
What’s the significance? “Don’t think about elephants” kind of thing?
replies(2): >>45071013 #>>45071601 #
nerdsniper ◴[] No.45071601[source]
Generally, in a cognitive context it's only possible to "do thing" or "do other thing". Even for mammals, it's much harder to "don't/not do thing" (cognitively). One of my biggest advice for people is if there's some habit/repeated behavior they want to stop doing, it's generally not effective (for a lot of people) to tell yourself "don't do that anymore!" and much, much more effective to tell yourself what you should do instead.

This also applies to dogs. A lot of people keep trying to tell their dog "stop" or "dont do that", but really its so much more effective to train your dog what they should be doing instead of that thing.

It's very interesting to me that this also seems to apply to LLMs. I'm a big skeptic in general, so I keep an open mind and assume that there's a different mechanism at play rather than conclude that LLM's are "thinking like humans". It's still interesting in its own context though!

replies(2): >>45071801 #>>45078102 #
1. ewoodrich ◴[] No.45071801[source]
And yet, despite this being a frequently recommended pro tip these days, neither OpenAI nor Anthropic seem to shy away from using "do not" / "does not" in their system prompts. By my quick count, 20+ negative commands in Anthropic's (official) Opus system prompt and 15+ in OpenAI's (purported) GPT-5 system prompt. Of course there are a lot of positive directions as well but OpenAI in particular still seems to rely on a lot of ALL CAPS and *emphasis*.

https://docs.anthropic.com/en/release-notes/system-prompts#a...

https://www.reddit.com/r/PromptEngineering/comments/1mknun8/...