JetBrains' local single-line autocomplete model is 0.1B params (w/ 1536-token context, i.e. roughly 170 lines of code): https://blog.jetbrains.com/blog/2024/04/04/full-line-code-co...
For context, GPT-2-small is 0.124B params (w/ 1024-token context).
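As a sanity check on that figure, here's a quick sketch that recomputes GPT-2-small's parameter count from its published config (12 layers, 12 heads, 768-dim embeddings, 50257-token vocab, 1024 positions); the breakdown follows the standard GPT-2 architecture and lands on ~124.4M:

    # Rough parameter count for GPT-2-small from its published config.
    # Assumes the standard GPT-2 layout: learned positional embeddings,
    # pre-LN transformer blocks, output head tied to the token embedding.
    n_layer, d = 12, 768
    vocab, n_ctx = 50257, 1024

    wte = vocab * d            # token embedding (tied with output head)
    wpe = n_ctx * d            # learned positional embedding

    per_block = (
        2 * d                  # ln_1 (gain + bias)
        + d * 3 * d + 3 * d    # fused q/k/v projection + bias
        + d * d + d            # attention output projection + bias
        + 2 * d                # ln_2
        + d * 4 * d + 4 * d    # MLP up-projection (4x) + bias
        + 4 * d * d + d        # MLP down-projection + bias
    )

    total = wte + wpe + n_layer * per_block + 2 * d  # + final layer norm
    print(f"{total:,} params")  # 124,439,808 ~= 0.124B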
replies(4):
For a run that short, you'll spend more time waiting for the node to come up, downloading the dataset, and compiling the model than on the training itself, though.