
684 points | prettyblocks | 1 comment

I mean anything in the 0.5B-3B range that's available on Ollama (for example). Have you built any cool tooling that uses these models as part of your workflow?
psyklic ◴[] No.42784612[source]
JetBrains' local single-line autocomplete model is 0.1B (w/ 1536-token context, ~170 lines of code): https://blog.jetbrains.com/blog/2024/04/04/full-line-code-co...

For context, GPT-2-small is 0.124B params (w/ 1024-token context).

pseudosavant ◴[] No.42785838[source]
I wonder how big that model is in RAM/disk. I use LLMs for FFmpeg all the time, and I was thinking about training a model on just the FFmpeg CLI arguments. If it was small enough, it could be a package for FFmpeg, e.g. `ffmpeg llm "Convert this MP4 into the latest royalty-free codecs in an MKV."`
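A minimal sketch of what that `ffmpeg llm` wrapper could look like, assuming a local Ollama server on its default port and a small model such as `qwen2.5:0.5b` (both assumptions, not part of the original comment; the prompt wording and helper names are invented for illustration). The idea: send the natural-language request to the model, extract the suggested command, and print it for the user to confirm rather than executing it blindly.

```python
import json
import urllib.request

# Invented instruction prompt for this sketch; a real tool would tune this heavily.
SYSTEM_PROMPT = (
    "You translate natural-language requests into a single ffmpeg command. "
    "Reply with only the command, nothing else."
)

def build_prompt(request: str) -> str:
    """Combine the fixed instruction with the user's request."""
    return f"{SYSTEM_PROMPT}\n\nRequest: {request}\nCommand:"

def extract_command(model_output: str) -> str:
    """Pull the first line that looks like an ffmpeg command from model output."""
    for line in model_output.splitlines():
        line = line.strip().strip("`")
        if line.startswith("ffmpeg"):
            return line
    raise ValueError("no ffmpeg command found in model output")

def ask_ollama(request: str, model: str = "qwen2.5:0.5b") -> str:
    """Query a local Ollama server via its /api/generate endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(request),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_command(json.loads(resp.read())["response"])

# Usage (requires a running Ollama server with the model pulled):
#   print(ask_ollama("Convert this MP4 into the latest royalty-free codecs in an MKV."))
```

Printing the command instead of running it sidesteps part of the correctness worry raised below: the user still reviews the flags before anything touches their files.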
binary132 ◴[] No.42787136[source]
That’s a great idea, but I feel like it might be hard to get it to be correct enough.