
310 points skarat | 17 comments

Things are changing so fast with these vscode forks that I'm barely able to keep up. Which one are you guys using currently? How does the autocomplete etc. compare between the two?
1. nlh ◴[] No.43962846[source]
I use Cursor as my base editor + Cline as my main agentic tool. I have not tried Windsurf so alas I can't comment here but the Cursor + Cline combo works brilliantly for me:

* Cursor's Cmd-K inline-edit feature (with Claude 3.7 as my base model there) works brilliantly for "I just need this one line/method fixed/improved"

* Cursor's tab-complete (née Supermaven) is great and better than any other I've used.

* Cline w/ Gemini 2.5 is absolutely the best I've tried when it comes to a full agentic workflow. I throw a paragraph of an idea at it and it comes up with a totally workable and working plan & implementation

Fundamentally, and this may be my issue to get over and not actually real, I like that Cline is a bring-your-own-API-key system and an open-source project, because its incentives are to generate the best prompt, max out the context, and get the best results (everyone working on it wants it to work well). Cursor's incentive is to get you the best results... within their budget ($0.05 per request for the max models, and within your monthly spend/usage allotment for the others). That means they're going to trim context, drop things, or apply other clever cost-saving techniques for Cursor, Inc. That's at odds with getting the best results, even if it only adds minor friction.

replies(5): >>43963043 #>>43964148 #>>43964404 #>>43967657 #>>43982988 #
2. abhinavsharma ◴[] No.43963043[source]
Totally agree on aligning with the one with the clearest incentives here
3. pj_mukh ◴[] No.43964148[source]
Cline's agentic work is better than Cursor's own?
replies(1): >>43967024 #
4. masterjack ◴[] No.43964404[source]
I also like Cline since it's open source: while I'm using it I can see the prompts and tools, and thus learn how to build better agents.
5. shmoogy ◴[] No.43967024[source]
Cursor does something with truncating context to save costs on their end; you don't get the same with Cline because you're paying for each transaction. So, depending on complexity, I find Cline works significantly better.

I still use Cursor chat with agent mode though, but I've always been indecisive. Like the others said, though, it's nice to see how Cline behaves to assist with creating your own agentic workflows.

replies(1): >>43967473 #
6. nsonha ◴[] No.43967473{3}[source]
> Cursor does something with truncating context to save costs on their end

I have seen this mentioned, but is there actually a source to back it up? I've tried Cline every now and then. While it's great, I don't find it better than Cursor (nor worse in any clear way)

replies(2): >>43969011 #>>43973398 #
7. machtiani-chat ◴[] No.43967657[source]
Just use codex and machtiani (mct). Both are open source. Machtiani was open sourced today. Mct can find context in a haystack, and it's efficient with tokens. Its embeddings are generated locally because of its hybrid indexing and localization strategy. No file chunking. No internet, if you want to be hardcore. Use any inference provider, even a local one. The demo video shows solving an issue in the VSCode codebase (133,000 commits and over 8,000 files) with only Qwen 2.5 Coder 7B. But you can use anything you want, like Claude 3.7. I never max out context in my prompts, not even close.
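For a first taste, an invocation looks like this (the flags are the same ones I use in my replies below; check the README for the full interface):

`mct "where is the h2 connection pool created?" --model gpt-4o-mini --mode chat`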

https://github.com/tursomari/machtiani

replies(2): >>43970275 #>>43971262 #
8. dimitri-vs ◴[] No.43969011{4}[source]
It's actually very easy to see for yourself. When the agent "looks" at a file it will say the number of lines it looked at; almost always it's the top 0-250 or 0-500, but this might depend on the model selected and whether MAX mode is used.
9. evnix ◴[] No.43970275[source]
How does this compare to aider?
replies(1): >>43974019 #
10. asar ◴[] No.43971262[source]
This sounds really cool. Can you explain your workflow in a bit more detail? i.e. how exactly you work with codex to implement features, fix bugs etc.
replies(1): >>43973930 #
11. nlh ◴[] No.43973398{4}[source]
Totally anecdotal of course, so take this with a grain of salt, but I've seen and experienced this when Cursor chats get very long (e.g. the context starts to really fill up). It suddenly starts "forgetting" things you talked about earlier or producing code that's at odds with code it already produced. I think that's partly why they suggest, but don't enforce, starting a new chat when things start to really grow.
replies(2): >>43983449 #>>44003122 #
12. machtiani-chat ◴[] No.43973930{3}[source]
Say I'm chatting in a git project directory, `undici`. I can show you a few ways I work with codex.

1. Follow up with Codex.

`mct "fix bad response on h2 server" --model anthropic/claude-3.7-sonnet:thinking`

Machtiani will stream the answer, then automatically apply any git patches suggested in the convo.

Then I could follow up with codex.

`codex "See unstaged git changes. Run tests to make sure it works and fix and problems with the changes if necessary."

2. Codex and MCT together

`codex "$(mct 'fix bad response on h2 server' --model deepseek/deepseek-r1 --mode answer-only)"`

In this case codex will dutifully implement the changes suggested by mct, saving tokens and time.

The key to the second example is `--mode answer-only`. Without this flag, mct will itself try to apply patches; with it, mct withholds the patches and codex applies them instead.
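To make that pattern repeatable, you could wrap it in a small shell function (just a sketch; `fix` is a made-up name):

`fix() { codex "$(mct "$1" --model deepseek/deepseek-r1 --mode answer-only)"; }`

`fix "fix bad response on h2 server"`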

3. Refer codex to the chat.

Say you did this

`mct "fix bad response on h2 server" --model gpt-4o-mini --mode chat`

Here, I used `--mode chat`, which tells mct to stream the answer and save the chat convo, but not to apply git changes (different from `--mode answer-only`).

You'll see mct print out something like

`Response saved to .machtiani/chat/fix_bad_server_response.md`

Now you can just tell codex.

`codex "See .machtiani/chat/fix_bad_server_resonse.md, and do this or that...."`

*Conclusion*

These example concepts should cover day-to-day use cases. There are other exciting workflows, but I should really post a video on those. You can do anything with the unix philosophy!
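For instance, since `--mode answer-only` writes the answer to stdout, you can pipe it like any other unix tool (a sketch using only the flags shown above):

`mct "summarize the h2 error handling" --model gpt-4o-mini --mode answer-only | tee h2-notes.md`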

replies(1): >>43983753 #
13. machtiani-chat ◴[] No.43974019{3}[source]
I skipped aider, but I've heard good things. I needed to work with large, complex repos, not vibe codebases. And agents always require top-notch models that are expensive and can't run well locally. So when Codex came out, I skipped straight to that.

But mct leverages weak models well, doing things not possible otherwise. And it does even better with stronger models: it rewards stronger models, but doesn't punish smaller ones.

So basically, you can save money and do more using mct + codex. But I hear aider is a terminal tool, so maybe try mct + aider?

14. richardreeze ◴[] No.43982988[source]
How much do you pay (roughly, per month) for Gemini's API? That's my main concern with switching to "bring your own API keys" tools.
15. nsonha ◴[] No.43983449{5}[source]
I don't really have this problem of long chats that everyone seems to have. Usually I can accomplish what I need in fewer than 10 turns. If I don't, I naturally want to restart the conversation with whatever I discovered last time, so at that point I accept the current state (or discard it all) and create a new chat, perhaps phrasing it differently. That just feels easier, not because I've hit any regression in my task.

It helps that the task is usually self-contained, but I guess as an engineer, it's kinda in your instinct to always divide and conquer any task.

16. asar ◴[] No.43983753{4}[source]
Amazing, really excited to try this out. And thanks for the time you took to write this up!
17. DANmode ◴[] No.44003122{5}[source]
aka whenever any deep work is getting done.