
684 points prettyblocks | 2 comments | source

I mean anything in the 0.5B-3B range that's available on Ollama (for example). Have you built any cool tooling that uses these models as part of your workflow?
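As a toy illustration of what such tooling can look like, here is a minimal sketch of single-line completion backed by a small local model served through Ollama's `/api/generate` endpoint. The model name, token budget, and endpoint port are assumptions; any small code model pulled into a default Ollama install should work:

```python
# Sketch: single-line autocomplete via a small local model on Ollama.
# Assumes Ollama is running locally; the model name is a placeholder.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"


def first_line(completion: str) -> str:
    """Trim a raw completion to one line, mimicking single-line autocomplete."""
    return completion.split("\n", 1)[0].rstrip()


def complete(prefix: str, model: str = "qwen2.5-coder:0.5b") -> str:
    """Ask the local model to continue `prefix`; keep only the first line."""
    payload = json.dumps({
        "model": model,
        "prompt": prefix,
        "stream": False,  # return one JSON object instead of a token stream
        "options": {"num_predict": 48, "temperature": 0},
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return first_line(body["response"])
```

An editor plugin would call `complete()` with the text before the cursor and surface the result as a ghost suggestion; `first_line` is what keeps a chatty model confined to one line.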
psyklic ◴[] No.42784612[source]
JetBrains' local single-line autocomplete model is 0.1B (w/ 1536-token context, ~170 lines of code): https://blog.jetbrains.com/blog/2024/04/04/full-line-code-co...

For context, GPT-2-small is 0.124B params (w/ 1024-token context).

replies(4): >>42785009 #>>42785728 #>>42785838 #>>42786326 #
1. staticautomatic ◴[] No.42786326[source]
Is that why their tab completion is so bad now?
replies(1): >>42791707 #
2. sam_lowry_ ◴[] No.42791707[source]
Hm... I wonder what your use case is. I do modern Enterprise Java, and the tab completion is a major time saver.

While interactive AI is all about posing a prompt, meditating on it, then trying to fix the outcome, IntelliJ tab completion simply shows what it will complete as you type, and you hit Tab when you are 100% OK with the completion, which surprisingly happens 90-99% of the time for me, depending on the project.