
684 points by prettyblocks | 1 comment

I mean anything in the 0.5B-3B range that's available on Ollama (for example). Have you built any cool tooling that uses these models as part of your workflow?
1. accrual No.42794022
Although there are better ways to test, I used a 3B model to speed up replies from my local AI server while testing an application I was developing. Yes, I could have mocked the HTTP replies, but in this case the small model let me just plug in and go.
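
A minimal sketch of that idea (not accrual's exact setup): if the application already talks to an OpenAI-style chat endpoint, you can point its client at Ollama's local OpenAI-compatible server and a small model during development. The base URL is Ollama's default; the model tag (llama3.2:3b) is just one example of a 3B model you could pull.

    # Sketch: swap the real AI server for a small local model during development.
    # Assumes the app uses an OpenAI-compatible client; llama3.2:3b is an example
    # model pulled beforehand with `ollama pull llama3.2:3b`.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
        api_key="ollama",                      # any non-empty string works locally
    )

    resp = client.chat.completions.create(
        model="llama3.2:3b",
        messages=[{"role": "user", "content": "ping"}],
    )
    print(resp.choices[0].message.content)

Because only the base URL and model name change, the rest of the application code stays identical to what runs against the production AI server.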