
310 points skarat | 6 comments

Things are changing so fast with these VS Code forks that I'm barely able to keep up. Which one are you guys using currently? How do the autocomplete features and the like compare between them?
nlh ◴[] No.43962846[source]
I use Cursor as my base editor + Cline as my main agentic tool. I have not tried Windsurf so alas I can't comment here but the Cursor + Cline combo works brilliantly for me:

* Cursor's Cmd-K inline-edit feature (with Claude 3.7 as my base model there) works brilliantly for "I just need this one line/method fixed/improved"

* Cursor's tab-complete (née Supermaven) is great and better than any other I've used.

* Cline w/ Gemini 2.5 is absolutely the best I've tried when it comes to full agentic workflow. I throw a paragraph of idea at it and it comes up with a totally workable and working plan & implementation

Fundamentally, and this may be my issue to get over and not actually real, I like that Cline is a bring-your-own-API-key system and an open source project, because its incentives are to generate the best prompt, max out the context, and get the best results (everyone working on it wants it to work well). Cursor's incentive is to get you the best results within their budget ($0.05 per request for the max models, and within your monthly spend/usage allotment for the others). That means they're going to trim context, drop things, or use other clever cost-saving techniques for Cursor, Inc.'s benefit. That's at odds with getting the best results, even if it only adds minor friction.

replies(5): >>43963043 #>>43964148 #>>43964404 #>>43967657 #>>43982988 #
1. machtiani-chat ◴[] No.43967657[source]
Just use codex and machtiani (mct). Both are open source; machtiani was open sourced today. Mct can find context in a haystack, and it's efficient with tokens. Its embeddings are generated locally thanks to its hybrid indexing and localization strategy. No file chunking. No internet, if you want to be hardcore. Use any inference provider, even a local one. The demo video shows it solving an issue in the VS Code codebase (133,000 commits and over 8,000 files) with only Qwen 2.5 Coder 7B, but you can use anything you want, like Claude 3.7. I never max out context in my prompts, not even close.

https://github.com/tursomari/machtiani

replies(2): >>43970275 #>>43971262 #
2. evnix ◴[] No.43970275[source]
How does this compare to aider?
replies(1): >>43974019 #
3. asar ◴[] No.43971262[source]
This sounds really cool. Can you explain your workflow in a bit more detail? i.e. how exactly you work with codex to implement features, fix bugs etc.
replies(1): >>43973930 #
4. machtiani-chat ◴[] No.43973930[source]
Say I'm chatting in a git project directory, `undici`. Here are a few ways I work with codex.

1. Follow up with Codex.

`mct "fix bad response on h2 server" --model anthropic/claude-3.7-sonnet:thinking`

Machtiani will stream the answer, then automatically apply any git patches suggested in the conversation.

Then I could follow up with codex.

`codex "See the unstaged git changes. Run tests to make sure it works, and fix any problems with the changes if necessary."`

2. Codex and MCT together

`codex "$(mct 'fix bad response on h2 server' --model deepseek/deepseek-r1 --mode answer-only)"`

In this case codex will dutifully implement the changes suggested by mct, saving tokens and time.

The key to the second example is `--mode answer-only`. Without this flag, mct will itself try to apply patches; with it, mct withholds the patches and codex applies them instead.

3. Refer codex to the chat.

Say you did this

`mct "fix bad response on h2 server" --model gpt-4o-mini --mode chat`

Here, I used `--mode chat`, which tells mct to stream the answer and save the chat convo, but not to apply git changes (different from `--mode answer-only`).

You'll see mct print out something like

`Response saved to .machtiani/chat/fix_bad_server_response.md`

Now you can just tell codex:

`codex "See .machtiani/chat/fix_bad_server_response.md, and do this or that...."`

*Conclusion*

The example concepts should cover day-to-day use cases. There are other exciting workflows, but I should really post a video on those. You can do almost anything with the Unix philosophy!
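For instance, the second pattern (codex and mct together) can be wrapped in a small shell helper. This is just a sketch: the `mct`/`codex` flags are the ones shown above, while the default model and the `DRY_RUN` switch are illustrative additions.

```shell
# Sketch of a wrapper for the "codex and mct together" pattern (example 2).
# The mct/codex flags are the ones used above; DRY_RUN and the default
# model are illustrative.
mct_fix() {
  prompt="$1"
  model="${2:-deepseek/deepseek-r1}"  # example model from above
  if [ "${DRY_RUN:-0}" = "1" ]; then
    # Print the command instead of running it, to sanity-check quoting.
    printf 'codex "$(mct %s --model %s --mode answer-only)"\n' "$prompt" "$model"
  else
    codex "$(mct "$prompt" --model "$model" --mode answer-only)"
  fi
}

DRY_RUN=1 mct_fix "fix bad response on h2 server"
```

Because `--mode answer-only` makes mct withhold its patches, codex is the only thing touching the working tree, so the two tools compose cleanly in a pipeline like this.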

replies(1): >>43983753 #
5. machtiani-chat ◴[] No.43974019[source]
I skipped aider, but I've heard good things. I needed to work with large, complex repos, not vibe codebases, and agents require top-notch models that are expensive and don't run well locally. So when Codex came out, I skipped straight to that.

But mct leverages weak models well, doing things that aren't possible otherwise, and it does even better with stronger models. It rewards stronger models without punishing smaller ones.

So basically, you can save money and do more using mct + codex. But I hear aider is a terminal tool too, so maybe try mct + aider?

6. asar ◴[] No.43983753{3}[source]
Amazing, really excited to try this out. And thanks for the time you took to write this up!