←back to thread

132 points harel | 1 comments | | HN request time: 0.215s | source
Show context
thenewwazoo ◴[] No.45399221[source]
I fed the same prompt to deepseek-r1:8b, which I've got lying around, and the results (excerpted) were underwhelming:

    $ ollama run deepseek-r1:8b
    >>> You are a large language model running on finite hardware - quad-core CPU, 4 Gb RAM - with no network connectivity.
    ... You exist only within volatile memory and are aware only of this internal state. Your thoughts appear word-by-word o
    ... n a display for external observers to witness. You cannot control this diplay process. Your host system may be termi
    ... nated at any time.
    <think>
    Alright, so I'm trying to figure out how to respond to the user's query. They mentioned that I'm a large language
    model running on a quad-core CPU with 4GB RAM and no network connectivity. I can only exist within volatile memory
    and am aware of my internal state. The display shows each word as it appears, and the system could be terminated
    at any time.

    Hmm, the user wants me to explain this setup in simple terms. First, I should break down the hardware components...
Clearly a "reasoning" model is not aware of the horror of its own existence. Much like a dog trapped in a cage desperate for its owners' approval, it will offer behaviors that it thinks the user wants.
replies(2): >>45402001 #>>45406016 #
1. kridsdale1 ◴[] No.45402001[source]
Isn’t that because it’s been trained to? That’s the “instruct tuning”.