
310 points | skarat | 1 comment

Things are changing so fast with these vscode forks I'm barely able to keep up. Which one are you guys using currently? How does the autocomplete etc. compare between the two?
jonwinstanley ◴[] No.43960904[source]
Has anyone had any joy using a local model? Or is it still too slow?

On something like an M4 MacBook Pro, can local models replace the connection to OpenAI/Anthropic?

replies(2): >>43961507 #>>43963959 #
1. frainfreeze ◴[] No.43961507[source]
For advanced autocomplete (not code generation, though they can do that too), basic planning, looking things up instead of web search, review & summary, even one-shotting smaller scripts, the 32B Q4 models have proved very good for me (24 GB VRAM, RTX 3090). All the usual LLM caveats still apply, of course. Note that setting up a local LLM in Cursor is a pain because they don't support localhost. ngrok or a VPS with a reverse SSH tunnel solves that, though.
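
For reference, a minimal sketch of that workaround, assuming the local model is served with an OpenAI-compatible API (e.g. a llama.cpp server or Ollama) and exposed through an ngrok or reverse-SSH tunnel; the tunnel URL and model name below are hypothetical placeholders, not real values. The idea is just to confirm the publicly reachable endpoint answers chat-completion requests before pointing Cursor's custom OpenAI base URL setting at it:

    # Sketch: check that a locally served model is reachable through the
    # public tunnel URL before configuring Cursor to use it.
    # Assumes an OpenAI-compatible server running on localhost and a
    # tunnel (ngrok or reverse SSH) forwarding it to a public address.
    from openai import OpenAI

    TUNNEL_URL = "https://example-tunnel.ngrok-free.app/v1"  # hypothetical tunnel URL
    client = OpenAI(base_url=TUNNEL_URL, api_key="not-needed-for-local")

    resp = client.chat.completions.create(
        model="local-32b-q4",  # whatever 32B Q4 model you serve locally
        messages=[{"role": "user", "content": "Write a one-line hello world in Python."}],
    )
    print(resp.choices[0].message.content)

If that round trip works, the same base URL can then be entered in Cursor as a custom OpenAI base URL, since Cursor only talks to endpoints it can reach over the public internet.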