
310 points by skarat | 28 comments

Things are changing so fast with these VS Code forks that I'm barely able to keep up. Which one are you guys using currently? How does the autocomplete, etc., compare between them?
1. joelthelion ◴[] No.43959984[source]
Aider! Use the editor of your choice and keep your coding assistant separate. Plus, it's open source and will stay that way, so there's no risk of it suddenly becoming expensive or disappearing.
replies(4): >>43960110 #>>43960122 #>>43960453 #>>43961416 #
2. mbanerjeepalmer ◴[] No.43960110[source]
I used to be religiously pro-Aider. But after a while, the little frictions of flicking back and forth between the terminal and VS Code, and adding and dropping files from the context myself, have worn down my appetite for it. The `--watch` mode is a neat solution but harms performance: the LLM gets distracted by deleting its own comment.

Roo is less solid but better-integrated.

Hopefully I'll switch back soon.

replies(1): >>43960199 #
3. aitchnyu ◴[] No.43960122[source]
Yup, choose your model and pay as you go, like commodities such as rice and water. The others played games with me to minimize context and push cheaper models (three modes, daily credits, defaulting to the most expensive model, etc.).

Also, the --watch mode is the most productive interface for using your editor: no need for extra textboxes with robot faces.
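For anyone who hasn't tried it, the watch workflow looks roughly like this (flag name and comment syntax as documented for recent Aider releases; check `aider --help` on your version):

```shell
# Start aider in watch mode so it monitors the repo for instruction comments
aider --watch-files

# Then, from any editor, save a comment ending in "AI!" to request an edit:
#   # convert this loop to a list comprehension AI!
# Aider applies the change and then deletes the instruction comment.
# A comment ending in "AI?" asks a question instead of requesting an edit.
```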

replies(1): >>43960191 #
4. fragmede ◴[] No.43960191[source]
FWIW, Gemini-*, which is available in Aider, isn't pay-as-you-go (PAYG) but postpaid: you get a bill at the end of the month, rather than the OpenAI-style model of charging up credits before you can use the service.
replies(1): >>43960659 #
5. fragmede ◴[] No.43960199[source]
I suspect the friction points are a bit different if you're a vim user. For me, Aider's automatic git commits and /undo command are what sell it at this current juncture of the technology. OpenHands looks promising, though rather complex.
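(For the unfamiliar, a sketch of why this is convenient: by default aider commits each applied edit to git, so rollback is just ordinary git. Command names are from aider's docs; verify against your version.)

```shell
# Inside the aider prompt:
#   /undo              # revert the last commit aider made
# Or from a normal shell, since aider's commits are real git commits:
git log --oneline -3   # aider-authored commits show up like any others
git revert HEAD        # roll one back with plain git if you prefer
```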
replies(1): >>43960384 #
6. movq ◴[] No.43960384{3}[source]
The (relative) simplicity is what sells aider for me (it also helps that I use neovim in tmux).

It's easy to figure out exactly what it sends to the LLM, and I like that it does one thing at a time. I want to babysit my LLMs, and those "agentic" tools that go off and do dozens of things in a loop make me feel out of control.

replies(1): >>43960675 #
7. Oreb ◴[] No.43960453[source]
Approximately how much does it cost in practice to use Aider? My understanding is that Aider itself is free, but you pay per token when using an API key for your LLM of choice. I can look up the prices of the various LLMs myself, but that doesn't help much, since I have no intuition whatsoever about how many tokens I'm likely to consume. The attraction of something like Zed or Cursor for me is that I just have a fixed monthly cost to worry about. I'd love to try Aider, as I suspect it suits my style of work better, but without any idea what it would cost me, I'm afraid to try.
replies(3): >>43960548 #>>43960743 #>>43963403 #
8. anotheryou ◴[] No.43960548[source]
Depends entirely on the API.

With deepseek: ~nothing.

replies(1): >>43960703 #
9. camkego ◴[] No.43960659{3}[source]
I guess this is a good reason to consider things like openrouter. Turns it into a prepaid service.
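A minimal sketch of that setup (the model slug here is illustrative; pick any model OpenRouter lists):

```shell
# OpenRouter is prepaid: you buy credits up front, then aider spends them.
export OPENROUTER_API_KEY=sk-or-...   # key from your OpenRouter account
aider --model openrouter/google/gemini-2.5-pro
```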
10. ayewo ◴[] No.43960675{4}[source]
I like your framing about “feeling out of control”.

For the occasional frontend task, I don’t mind being out of control when using agentic tools. I guess this is the origin of Karpathy’s vibe coding moniker: you surrender to the LLM’s coding decisions.

For backend tasks, which are my bread and butter, I certainly want to know what's being sent to the LLM, so it's just easier to use the chat interface directly.

This way I am fully in control. I can cherry pick the good bits out of whatever the LLM suggests or redo my prompt to get better suggestions.

replies(1): >>43965587 #
11. tuyguntn ◴[] No.43960703{3}[source]
Is DeepSeek fast enough for you? For me the API is very slow, sometimes unusably so.
replies(1): >>43960980 #
12. m3adow ◴[] No.43960743[source]
I'm using Gemini 2.5 Pro with Aider and Cline for work. I'd say when working for 8 full hours without any meetings or other interruptions, I'd hit around $2. In practice, I average at $0.50 and hit $1 once in the last weeks.
replies(4): >>43960920 #>>43961303 #>>43961794 #>>43961905 #
13. beacon294 ◴[] No.43960920{3}[source]
This is very inexpensive. What are your workflow and cost-saving techniques? I can spend $10/h or more with very short sessions and few files.
replies(1): >>43961252 #
14. anotheryou ◴[] No.43960980{4}[source]
To be honest, I'm using Windsurf with OpenAI/Google right now; I used DeepSeek with Aider when it was still less crowded.

My only problem was DeepSeek occasionally not answering at all, but generally it was fast (non-thinking, that is).

15. m3adow ◴[] No.43961252{4}[source]
Huh, I honestly didn't configure anything for savings. I just add the whole repo and do my stuff. How do you get to $10/h? I probably couldn't provoke that if I tried.

I assume we have a very different workflow.

replies(1): >>43964319 #
16. bluehatbrit ◴[] No.43961303{3}[source]
I'd be really keen to know more about what you're using it for, how you typically prompt it, and how many times you're reaching for it. I've had some success at keeping spend low but can also easily spend $4 from a single prompt so I don't tend to use tools like Aider much. I'd be much more likely to use them if I knew I could reliably keep the spend down.
replies(1): >>43961796 #
17. jbellis ◴[] No.43961416[source]
I love Aider, but I got frustrated with its limitations and ended up creating Brokk to solve them: https://brokk.ai/

Compared to Aider, Brokk

- Has a GUI (I know, tough sell for Aider users but it really does help when managing complex projects)

- Builds on a real static analysis engine, so its equivalent of the repomap doesn't get hopelessly confused in large codebases

- Has extremely useful git integration (view git log, right click to capture context into the workspace)

- Is also OSS and supports BYOK

I'd love to hear what you think!

replies(1): >>43970317 #
18. didgeoridoo ◴[] No.43961794{3}[source]
Wow, my first venture into Claude Code (which completely failed at a minor feature addition on a tiny Swift codebase) burned $5 in about 20 minutes.

Probably more a product of Sonnet 3.7's rampant ADHD than of the CLI tool itself (and maybe a bit of LLMs-suck-at-Swift?).

replies(1): >>43963060 #
19. m3adow ◴[] No.43961796{4}[source]
I'll try to elaborate:

I'm using VS Code for most edits. Tab completion is via Copilot, though I don't use it much, as I find the predictions subpar or too wordy when commenting. I use Aider for rubber-ducking and for implementing small- to mid-scope changes. Normally I add the required files, switch to architect or ask mode (depending on the problem I want to solve), and explain what my problem is and how I want it solved. If Aider's answer satisfies me, I switch to code mode and allow the changes.

No magic; I have no idea how a single prompt can generate $4. I wouldn't be surprised if I'm only scratching the surface with my approach, though; maybe there's a better but more costly strategy yielding better results that I just haven't found yet.
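As a concrete sketch, a session in that style looks something like this (file names and prompts are hypothetical; /ask, /architect, and /code are aider's built-in mode switches):

```shell
aider src/api.py src/retry.py   # start with just the relevant files in context
# Then, at the aider prompt:
#   /ask why does the retry logic double-fire on timeouts?
#   /architect collapse the retry logic into a single decorator
#   /code                       # switch to code mode and let it apply the edits
```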

20. Aeolun ◴[] No.43961905{3}[source]
Not sure how that’s possible? Do you ask it one question every hour or so?
21. liveoneggs ◴[] No.43963060{4}[source]
In my testing, Aider tends to spend about a tenth of what Claude Code does. I assume it's because, in Aider, you're explicit about what you /add and so on.
22. BeetleB ◴[] No.43963403[source]
It will tell you how much each request cost, as well as a running total.

Use /tokens to see how many tokens are in the context for the next request. You manage the context by dropping files and clearing it.
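The relevant in-chat commands (names per aider's built-in /help):

```shell
# At the aider prompt:
#   /tokens           # report tokens used by chat history, repo map, and files
#   /drop src/old.py  # remove a file from the context
#   /clear            # clear the chat history
```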

23. theonething ◴[] No.43964319{5}[source]
Do you use any tool to add the whole repo?
24. fragmede ◴[] No.43965587{5}[source]
How do you get out the "good bits" without a diff/patch file? Or do you ask the LLM for one and apply it manually?
replies(1): >>43966898 #
25. ayewo ◴[] No.43966898{6}[source]
Basically what antirez described about 4 days ago in this thread https://news.ycombinator.com/item?id=43929525.

So this part of my workflow is intentionally fairly labor intensive because it involves lots of copy-pasting between my IDE and the chat interface in a browser.

replies(1): >>43969635 #
26. fragmede ◴[] No.43969635{7}[source]
From the linked comment:

> Mandatory reminder that "agentic coding" works way worse than just using the LLM directly

just isn't true. If everything else were equal, it might be, but it turns out that system prompts are quite powerful in influencing how an LLM behaves. ChatGPT with a blank user-entered system prompt behaves differently (read: worse at coding) than one with a tuned system prompt. Aider/Copilot/Windsurf/etc. all have custom system prompts that make them more powerful rather than less, compared to using a raw web browser, and they don't involve the overhead of copy-pasting.

27. evnix ◴[] No.43970317[source]
Apart from the GUI, what does it improve on compared to Aider?
replies(1): >>44000287 #
28. jbellis ◴[] No.44000287{3}[source]
Short answer: static analysis

Long answer: https://brokk.ai/blog/lean-context-lightning-development