
310 points skarat | 1 comment | source

Things are changing so fast with these VSCode forks that I'm barely able to keep up. Which one are you guys using currently? How does the autocomplete, etc., compare between the two?
welder ◴[] No.43960527[source]
Neither? I'm surprised nobody has said it yet. I turned off AI autocomplete, and sometimes use the chat to debug or generate simple code but only when I prompt it to. Continuous autocomplete is just annoying and slows me down.
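For anyone wanting a similar setup in VS Code, a minimal settings.json sketch that turns off inline (ghost-text) AI completions while leaving chat available. This assumes the GitHub Copilot extension; other AI plugins expose their own toggles:

```json
{
  // Disable Copilot inline completions for all languages
  "github.copilot.enable": {
    "*": false
  },
  // Disable the editor's inline-suggestion ghost text entirely
  "editor.inlineSuggest.enabled": false
}
```

The regular (non-AI) completion widget keeps working; only the continuous inline suggestions go away.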
replies(27): >>43960550 #>>43960616 #>>43960839 #>>43960844 #>>43960845 #>>43960859 #>>43960860 #>>43960985 #>>43961007 #>>43961090 #>>43961128 #>>43961133 #>>43961220 #>>43961271 #>>43961282 #>>43961374 #>>43961436 #>>43961559 #>>43961887 #>>43962085 #>>43962163 #>>43962520 #>>43962714 #>>43962945 #>>43963070 #>>43963102 #>>43963459 #
1. nyarlathotep_ ◴[] No.43963459[source]
I'm past the honeymoon stage for LLM autocomplete.

I just noticed CLion moved to a community license, so I re-installed it and set up Copilot integration.

It's really noisy, and somehow the same binding (Tab) for built-in autocomplete "collides" with LLM suggestions, which arrive with varying latency. It's totally unusable in this state: you'll attempt to complete a single local variable or something and end up with 12 lines of unrelated code.
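In VS Code, one way to sidestep this kind of Tab collision is to move the accept action for inline suggestions off Tab. A hedged keybindings.json sketch, using VS Code's built-in inline-suggest commands (key choices here are just examples):

```json
[
  {
    // Accept an LLM inline suggestion with Ctrl+Right instead of Tab
    "key": "ctrl+right",
    "command": "editor.action.inlineSuggest.commit",
    "when": "inlineSuggestionVisible"
  },
  {
    // Unbind Tab from accepting inline suggestions ("-" removes a default binding)
    "key": "tab",
    "command": "-editor.action.inlineSuggest.commit",
    "when": "inlineSuggestionVisible"
  }
]
```

With this, Tab only drives the built-in completion widget, and LLM suggestions require a deliberate keypress.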

I've had much better success with VSCode in this area, but the LLM-generated completions in either editor are usually pretty poor. I'm not sure whether that's because a different model is used for autocomplete, but it's not very useful and often distracting, although it looks cool.