
310 points skarat | 7 comments

Things are changing so fast with these VS Code forks that I'm barely able to keep up. Which one are you guys using currently? How does the autocomplete, etc., compare between the two?
joelthelion ◴[] No.43959984[source]
Aider! Use the editor of your choice and keep your coding assistant separate. Plus, it's open source and will stay that way, so there's no risk of it suddenly becoming expensive or disappearing.
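
For anyone who hasn't tried it, a minimal way to start, assuming the PyPI package name `aider-chat` and an OpenAI key (other providers work too):

    pip install aider-chat            # install the CLI
    export OPENAI_API_KEY=sk-...      # or configure another provider
    cd myrepo
    aider src/app.py                  # open a chat session with that file in context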
replies(4): >>43960110 #>>43960122 #>>43960453 #>>43961416 #
1. mbanerjeepalmer ◴[] No.43960110[source]
I used to be religiously pro-Aider. But after a while, the little frictions of flicking back and forth between the terminal and VS Code, and of adding and dropping files from the context myself, wore down my appetite for it. The `--watch` mode is a neat solution, but it harms performance: the LLM gets distracted by having to delete its own trigger comment.
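
For reference, the watch workflow looks roughly like this; recent versions spell the flag `--watch-files`, but check `aider --help` for yours:

    aider --watch-files
    # then, in your own editor, mark a line for the LLM:
    #   def slugify(s):   # make this handle unicode AI!
    # aider picks up the "AI!" comment, makes the edit,
    # and deletes the trigger comment afterwards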

Roo is less solid but better-integrated.

Hopefully I'll switch back soon.

replies(1): >>43960199 #
2. fragmede ◴[] No.43960199[source]
I suspect that if you're a vim user those friction points are a bit different. For me, Aider's git auto-commit and /undo command are what sell it at this current juncture of technology. OpenHands looks promising, though rather complex.
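
A rough illustration of that loop (the transcript is paraphrased, not verbatim aider output):

    aider app.py
    # prompt: "add retry logic to fetch_data"
    #   -> aider edits the file and auto-commits the change to git
    # /undo
    #   -> reverts that last auto-commit, leaving your own commits alone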
replies(1): >>43960384 #
3. movq ◴[] No.43960384[source]
The (relative) simplicity is what sells aider for me (it also helps that I use neovim in tmux).

It's easy to figure out exactly what it's sending to the LLM, and I like that it does one thing at a time. I want to babysit my LLMs, and those "agentic" tools that go off and do dozens of things in a loop make me feel out of control.
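
If you want to audit that yourself, aider can print what it sends; this assumes the `--verbose` flag, which has been there in the versions I've used, but check `aider --help`:

    aider --verbose app.py
    # prints the system prompt and each message sent to the model,
    # so you can see exactly what context the LLM receives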

replies(1): >>43960675 #
4. ayewo ◴[] No.43960675{3}[source]
I like your framing about “feeling out of control”.

For the occasional frontend task, I don’t mind being out of control when using agentic tools. I guess this is the origin of Karpathy’s vibe coding moniker: you surrender to the LLM’s coding decisions.

For backend tasks, which are my bread and butter, I certainly want to know what's being sent to the LLM, so it's just easier to use the chat interface directly.

This way I'm fully in control. I can cherry-pick the good bits out of whatever the LLM suggests, or redo my prompt to get better suggestions.

replies(1): >>43965587 #
5. fragmede ◴[] No.43965587{4}[source]
How do you get the "good bits" out without a diff/patch file? Or do you ask the LLM for one and apply it manually?
replies(1): >>43966898 #
6. ayewo ◴[] No.43966898{5}[source]
Basically what antirez described about 4 days ago in this thread https://news.ycombinator.com/item?id=43929525.

So this part of my workflow is intentionally fairly labor-intensive, because it involves lots of copy-pasting between my IDE and the chat interface in a browser.
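
One way to trim the pasting, if the repo is under git: ask the model to answer as a unified diff, save that into a file, and let git do the rest (the file name here is made up):

    # paste the model's diff into changes.patch, then:
    git apply --check changes.patch   # dry run: does it apply cleanly?
    git apply changes.patch           # apply to the working tree
    git add -p                        # stage only the hunks you actually want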

replies(1): >>43969635 #
7. fragmede ◴[] No.43969635{6}[source]
From the linked comment:

> Mandatory reminder that "agentic coding" works way worse than just using the LLM directly

just isn't true. If everything else were equal, that might be true, but it turns out that system prompts are quite powerful in shaping how an LLM behaves. ChatGPT with a blank user-entered system prompt behaves differently (read: worse at coding) than one with a tuned system prompt. Aider/Copilot/Windsurf/etc. all ship custom system prompts that make them more powerful rather than less, compared to using a raw web browser, and they also avoid the overhead of copy-pasting.
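
For the curious, a "system prompt" is just the system-role message in the chat API. A minimal sketch against the OpenAI-style endpoint (the model name and prompt text are placeholders):

    curl https://api.openai.com/v1/chat/completions \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
            "model": "gpt-4o",
            "messages": [
              {"role": "system",
               "content": "You are a senior engineer. Reply with minimal, correct code."},
              {"role": "user",
               "content": "Add input validation to parse_config()."}
            ]
          }'

Tools like Aider ship a much longer, carefully tuned version of that system message, which is a big part of the quality difference described above.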