
684 points prettyblocks | 2 comments

I mean anything in the 0.5B-3B range that's available on Ollama (for example). Have you built any cool tooling that uses these models as part of your workflow?
behohippy ◴[] No.42785105[source]
I have a mini PC with an n100 CPU connected to a small 7" monitor sitting on my desk, under the regular PC. I have llama 3b (q4) generating endless stories in different genres and styles. It's fun to glance over at it and read whatever it's in the middle of making. I gave llama.cpp one CPU core and it generates slow enough to just read at a normal pace, and the CPU fans don't go nuts. Totally not productive or really useful but I like it.
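The setup above can be sketched with llama.cpp's CLI. The binary name, model filename, and prompt below are assumptions for illustration, not the commenter's exact invocation:

```shell
# One CPU thread (-t 1) keeps generation slow enough to read live
# and keeps the fans quiet; -n -1 asks for unbounded output, so the
# "endless story" never terminates on its own.
./llama-cli \
  -m models/llama-3b-q4.gguf \
  -t 1 \
  -n -1 \
  -p "Write an endless anthology of short stories, switching genre and style every few paragraphs."
```

Pipe the output to the small monitor's terminal and it becomes a passive ambient display.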
replies(6): >>42785192 #>>42785253 #>>42785325 #>>42786081 #>>42786114 #>>42787856 #
1. ipython ◴[] No.42786114[source]
That's neat. I just tried something similar:

    FORTUNE=$(fortune) && echo $FORTUNE && echo "Convert the following output of the Unix `fortune` command into a small screenplay in the style of Shakespeare: \n\n $FORTUNE" | ollama run phi4
replies(1): >>42790266 #
2. watermelon0 ◴[] No.42790266[source]
Doesn't `fortune` inside the double quotes get executed as a command substitution by bash? You should use single quotes (or escape the backticks) there instead.