
310 points by skarat | 1 comment

Things are changing so fast with these VS Code forks I'm barely able to keep up. Which one are you guys using currently? How does the autocomplete etc. compare between the two?
nlh ◴[] No.43962846[source]
I use Cursor as my base editor + Cline as my main agentic tool. I have not tried Windsurf so alas I can't comment here but the Cursor + Cline combo works brilliantly for me:

* Cursor's Cmd-K edit-inline feature (with Claude 3.7 as my base model there) works brilliantly for "I just need this one line/method fixed/improved"

* Cursor's tab-complete (née Supermaven) is great and better than any other I've used.

* Cline w/ Gemini 2.5 is absolutely the best I've tried when it comes to a full agentic workflow. I throw a paragraph of ideas at it and it comes up with a totally workable, working plan & implementation

Fundamentally, and this may be my issue to get over and not actually real, I like that Cline is a bring-your-own-API-key system and an open-source project, because its incentives are to generate the best prompt, max out the context, and get the best results (everyone working on it wants it to work well). Cursor's incentive is to get you the best results... within their budget ($0.05 per request for the Max models, and within your monthly spend/usage allotment for the others). That means they're going to try to trim context, drop things, or use other clever cost-saving techniques to protect Cursor, Inc.'s margins. That's at odds with getting the best results, even if the friction is only minor.
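(For what it's worth, here's a rough sketch of what "bring your own API key" means mechanically: the tool sends your prompt and as much repo context as you and the model allow straight to the provider you pay, with no middleman budget to trim it. This assumes an OpenAI-compatible provider; the snippet is purely illustrative, not Cline's actual code.)

    # Hedged sketch of a bring-your-own-key request to an OpenAI-compatible provider.
    # Not Cline's real implementation; it just shows that the full context goes to
    # the model you chose, and you pay that provider directly.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI(api_key="YOUR_PROVIDER_KEY")  # your key, your bill, your context budget

    repo_context = "\n\n".join(
        f"# {p}\n{p.read_text(errors='ignore')}" for p in Path("src").rglob("*.py")
    )  # as much context as the model's window allows; nothing in between trims it

    resp = client.chat.completions.create(
        model="gpt-4o",  # or whichever model your provider exposes
        messages=[
            {"role": "system", "content": "You are a coding agent."},
            {"role": "user", "content": f"{repo_context}\n\nFix the flaky retry logic."},
        ],
    )
    print(resp.choices[0].message.content)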

replies(5): >>43963043 #>>43964148 #>>43964404 #>>43967657 #>>43982988 #
machtiani-chat ◴[] No.43967657[source]
Just use Codex and machtiani (mct). Both are open source; machtiani was open-sourced today. Mct can find context in a haystack, and it's efficient with tokens. Its embeddings are generated locally because of its hybrid indexing and localization strategy. No file chunking. No internet, if you want to be hardcore. Use any inference provider, even a local one. The demo video shows it solving an issue in the VS Code codebase (133,000 commits and over 8,000 files) with only Qwen 2.5 Coder 7B. But you can use anything you want, like Claude 3.7. I never max out the context in my prompts, not even close.

https://github.com/tursomari/machtiani
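
(To make the local-embeddings claim concrete, here's a rough, hypothetical sketch of the general approach: embed whole files locally, retrieve the closest ones, and answer with a small model behind a local OpenAI-compatible endpoint such as Ollama. This is not machtiani's actual code or CLI, just the shape of the idea.)

    # Hypothetical sketch: whole-file local embeddings + a local model endpoint.
    # NOT machtiani's implementation; it only illustrates locally generated
    # embeddings (no remote embedding API, no chunking) feeding a small coder model.
    from pathlib import Path
    import numpy as np
    from sentence_transformers import SentenceTransformer  # runs fully locally
    from openai import OpenAI

    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    files = list(Path(".").rglob("*.py"))
    texts = [p.read_text(errors="ignore") for p in files]        # whole files, no chunking
    vectors = embedder.encode(texts, normalize_embeddings=True)

    def top_files(query: str, k: int = 5):
        q = embedder.encode([query], normalize_embeddings=True)[0]
        scores = vectors @ q                                      # cosine similarity
        return [files[i] for i in np.argsort(scores)[::-1][:k]]

    # Any OpenAI-compatible endpoint works, including a local one (e.g. Ollama).
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
    question = "Where is the HTTP retry logic implemented?"
    context = "\n\n".join(f"# {p}\n{p.read_text(errors='ignore')}" for p in top_files(question))
    resp = client.chat.completions.create(
        model="qwen2.5-coder:7b",  # the kind of small local model mentioned above
        messages=[{"role": "user", "content": f"{context}\n\n{question}"}],
    )
    print(resp.choices[0].message.content)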

replies(2): >>43970275 #>>43971262 #
evnix ◴[] No.43970275[source]
How does this compare to aider?
replies(1): >>43974019 #
machtiani-chat ◴[] No.43974019[source]
I skipped aider, but I've heard good things. I needed to work with large, complex repos, not vibe codebases, and agents always require top-notch models that are expensive and can't run well locally. So when Codex came out, I skipped straight to that.

But mct leverages weak models well and does things that aren't possible otherwise. And it does even better with stronger models: it rewards stronger models without punishing smaller ones.

So basically, you can save money and do more using mct + codex. But I hear aider is a terminal tool, so maybe try mct + aider?