
310 points | skarat | 1 comment

Things are changing so fast with these VS Code forks that I'm barely able to keep up. Which one are you using currently? How does the autocomplete etc. compare between the two?
jonwinstanley No.43960904
Has anyone had any joy using a local model? Or is it still too slow?

On something like an M4 MacBook Pro, can local models replace the connection to OpenAI/Anthropic?

replies(2): >>43961507 >>43963959
1. int_19h No.43963959
It's not so much that they're slow; it's that local models are still a far cry from what SOTA cloud LLM providers offer. Depending on what you're actually doing, though, a local model might be good enough.
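For anyone wanting to try this: a common setup is a local server (e.g. Ollama) exposing an OpenAI-compatible API, which editor plugins can then point at instead of a cloud provider. Below is a minimal, hedged sketch using only the Python standard library; the default base URL (`http://localhost:11434/v1`, Ollama's usual port) and the model name are assumptions — substitute whatever your local server actually serves.

```python
import json

def build_chat_request(model, prompt, base_url="http://localhost:11434/v1"):
    """Build a request for an OpenAI-compatible /v1/chat/completions endpoint.

    Hypothetical helper for illustration: returns the target URL and the
    JSON-encoded request body, without sending anything over the network.
    """
    url = f"{base_url}/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(payload).encode("utf-8")

# Actually sending the request requires a running local server, e.g.:
#   import urllib.request
#   url, body = build_chat_request("llama3", "Write a binary search in C")
#   req = urllib.request.Request(
#       url, data=body, headers={"Content-Type": "application/json"})
#   print(urllib.request.urlopen(req).read().decode())
```

Because the wire format matches OpenAI's chat API, the same editor integration can usually be switched between a cloud provider and a local model just by changing the base URL.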