776 points rcchen | 237 comments
1. extr ◴[] No.44537358[source]
IMO other than the Microsoft IP issue, I think the biggest thing that has shifted since this acquisition was first in the works is Claude Code has absolutely exploded. Forking an IDE and all the expense that comes with that feels like a waste of effort, considering the number of free/open source CLI agentic tools that are out there.

Let's review the current state of things:

- Terminal CLI agents are several orders of magnitude less $$$ to develop than forking an entire IDE.

- CC is dead simple to onboard (use whatever IDE you're using now, with a simple extension for some UX improvements).

- Anthropic is free to aggressively undercut their own API margins (and middlemen like Cursor) in exchange for more predictable subscription revenue + training data access.

What does Cursor/Windsurf offer over VS Code + CC?

- Tab completion model (Cursor's remaining moat)

- Some UI niceties like "add selection to chat", etc.

Personally I think this is a harbinger of where things are going. Cursor was fastest to $900M ARR and IMO will be fastest back down again.

replies(39): >>44537388 #>>44537433 #>>44537440 #>>44537454 #>>44537465 #>>44537526 #>>44537594 #>>44537613 #>>44537619 #>>44537711 #>>44537749 #>>44537830 #>>44537848 #>>44537853 #>>44537964 #>>44538026 #>>44538053 #>>44538066 #>>44538259 #>>44538272 #>>44538316 #>>44538366 #>>44538384 #>>44538404 #>>44538553 #>>44538681 #>>44538894 #>>44538939 #>>44539043 #>>44539254 #>>44539528 #>>44540250 #>>44540304 #>>44540339 #>>44540409 #>>44541020 #>>44541176 #>>44541551 #>>44541786 #
2. alanmoraes ◴[] No.44537388[source]
I never understood why those tools need to fork Visual Studio Code. Wouldn't an extension suffice?
replies(6): >>44537400 #>>44537478 #>>44538025 #>>44538642 #>>44539623 #>>44540158 #
3. extr ◴[] No.44537400[source]
IIRC the problem is that VS Code does not allow extensions to create custom UI in the panel areas except via WebViews(?), which makes for a poor experience. Plus Cursor does a lot with background indexing to make their tab completion model really good - more than would be possible with the extension APIs available.
4. libraryofbabel ◴[] No.44537433[source]
Some excellent points. On “add selection to chat”, I just want to add that the Claude Code VS Code extension automatically passes the current selection to the model. :)

I am genuinely curious if any Cursor or Windsurf users who have also tried Claude Code could speak to why they prefer the IDE-fork tools? I’ve only ever used Claude Code myself - what am I missing?

replies(4): >>44537473 #>>44537492 #>>44537559 #>>44537819 #
5. wagwang ◴[] No.44537440[source]
As far as I can tell, terminal agents are inferior to hosted agents in sandboxed/imaged environments when it comes to concurrent execution, and far inferior to an AI-assisted IDE in terms of UX, so what exactly is the point? The "UI niceties" are the whole point of using Cursor, and somehow everyone else sucks at it.
replies(2): >>44537472 #>>44537475 #
6. adamoshadjivas ◴[] No.44537454[source]
Agreed on everything. Just to add: not only is Anthropic offering CC at something like a 500% loss, they restricted Sonnet/Opus 4 access to Windsurf and jacked up their enterprise deal to Cursor. The increase in price was so big that it forced Cursor to make that disastrous downgrade to their plans.

I think the only way Cursor and other UX wrappers still win is if on-device models, or at least open source models, catch up in the next 2 years. Then I can see a big push for UX if models are truly a commodity. But as long as Claude is much better, then yes, they hold all the cards. (And they don't have a bigger company to have a civil war with, like OpenAI.)

replies(7): >>44537599 #>>44537888 #>>44537928 #>>44540530 #>>44541463 #>>44541798 #>>44541868 #
7. nikcub ◴[] No.44537465[source]
Cursor sees it coming - it's why they're moving to the web and mobile[0]

The bigger issue is the advantage Anthropic, Google and OpenAI have in developing and deploying their own models. It wasn't that long ago that Cursor was reading 50 lines of code at a time to save on token costs. Anthropic just came out and yolo'd the context window because they could afford to, and it blew everything else away.

Cursor could release a cli tomorrow but it wouldn't help them compete when Anthropic and Google can always be multiples cheaper

[0] https://cursor.com/blog/agent-web

replies(2): >>44537553 #>>44537947 #
8. rhodysurf ◴[] No.44537472[source]
You’re missing the point tho. The point of the CLI agent is that it’s a building block to put this thing everywhere. Look at CC’s GitHub plugin, it’s great.
replies(1): >>44537572 #
9. rhodysurf ◴[] No.44537473[source]
It already does this btw: when you use CC from the VS Code terminal and select things, it adds them to CC's context automatically.
replies(1): >>44538073 #
10. extr ◴[] No.44537475[source]
Not sure what you mean. "Hosted agents in sandboxed/imaged environments"? The entire selling point of CC is that you can do

- > curl -fsSL http://claude.ai/install.sh | bash

- > claude

- > OAuth to your Anthropic account

Done. Now you have a SOTA agentic AI with pretty forgiving usage limits up and running immediately. This is why it's capturing developer mindshare. The simplicity of getting up and going with it is a selling point.

replies(1): >>44537804 #
11. efitz ◴[] No.44537478[source]
Cline and Roo Code (my favorite Cline fork) are fantastic and run as normal VS Code extensions.

Occasionally they lose their connection to the terminal in VSCode, but I’ve got no other integration complaints.

And I really prefer the bring-your-own-key model as opposed to letting the IDE be my middleman.

replies(1): >>44539424 #
12. extr ◴[] No.44537492[source]
Cursor's tab completion model is legitimately fantastic and for many people is worth the entire $20 subscription. Lint fixes or syntax-level refactors are guessed and executed instantly with TAB with close to 100% accuracy. This is their final moat IMO, if Copilot manages to bring their tab completion up to near parity, very little reason to use Cursor.
replies(5): >>44537718 #>>44537724 #>>44539087 #>>44539640 #>>44539888 #
13. ripberge ◴[] No.44537526[source]
Forking an IDE is not expensive if it's the core product of a company with a $900M ARR.

I doubt MS has ever made $900M off of VS Code.

replies(1): >>44537536 #
14. extr ◴[] No.44537536[source]
"The same editor you already use for free, but with slightly nicer UI for some AI stuff" is not a $900M ARR product.
replies(2): >>44537742 #>>44538284 #
15. extr ◴[] No.44537553[source]
I think this is an interesting and cool direction for Cursor to be going in and I don't doubt something like this is the future. But I have my doubts whether it will save them in the short/medium term:

- AI is not good enough yet to abandon the traditional IDE experience if you're doing anything non-trivial. It's hard to find use cases for this right now.

- There's no moat here. There are already a dozen "Claude Code UI" OSS projects with similar basic functionality.

replies(1): >>44537574 #
16. druskacik ◴[] No.44537559[source]
I'd like to ask the opposite question: why do people prefer command line tools? I tried both and I prefer working in IDE. The main reason is that I don't trust the LLMs too much and I like to see and potentially quickly edit the changes they make. With an IDE, I can iterate much faster than with the command line tool.

I haven't tried the Claude Code VS Code extension. Has anyone replaced Cursor with this setup?

replies(3): >>44537629 #>>44539225 #>>44539892 #
17. wagwang ◴[] No.44537572{3}[source]
CC on GitHub just looks like Codex. I see your point, but it seems like all the big players basically have a CLI agent, and most of them think it's just an implementation detail so they don't expose it.
18. madeofpalk ◴[] No.44537574{3}[source]
I have a whole backlog of trivial tasks I never get around to because I’m working on less trivial things.
19. shados ◴[] No.44537594[source]
CC would explode even further if they had an official Team/Enterprise plan (likely in the works, per the Claude Code Waffle flag), and if it worked on Windows without WSL (supposedly pretty easy to fix, they just didn't bother). Cursor learned that the % of Windows users was really high when they started looking, even before they really supported it.

They're likely artificially holding it back, either because it's a loss leader they want to use in a very specific way, or because they're planning the next big boom/launch (maybe with a new model to build hype?).

replies(2): >>44538658 #>>44541569 #
20. virgildotcodes ◴[] No.44537599[source]
Seems like the survival strategy for cursor would be to develop their own frontier coding model. Maybe they can leverage the data from their still somewhat significant lead in the space to make a solid effort.
replies(3): >>44538168 #>>44538318 #>>44540321 #
21. bredren ◴[] No.44537613[source]
> with a simple extension for some UX improvements

What are the UX improvements?

I was using the Pycharm plugin and didn’t notice any actual integration.

I had problems with PyCharm's terminal—not least of which was the default 5k-line scrollback, which, while easy to change, was the worst part of CC for me at first.

I finally jumped to using iterm and then using pycharm separately to do code review, visual git workflows, some run config etc.

But the actual value of PyCharm—and I've been a real booster of that IDE—has shrunk due to CC, and moving out of the built-in terminal is a threat to my usage of the product.

If the plugin offered some big value I might stick with it but I’m not sure what they could even do.

replies(1): >>44537765 #
22. asdev ◴[] No.44537619[source]
For those who seldom use the terminal, is Claude Code still usable? I heard it doesn't do tab autocomplete in the IDE like Cursor.
replies(2): >>44537654 #>>44538717 #
23. rapind ◴[] No.44537629{3}[source]
When it comes to coding, you're just looking at (coloured) diffs in your shell. It's pretty easy to set up MCP and have Claude be the director. Like, I have zen MCP running with an OpenRouter API key, and will ask Claude to consult with Pro (Gemini) or o3, or both, to come up with an architecture review / plan.

I honestly don't know how great that is, because it just reiterates what I was planning anyways, and I can't tell if it's just glazing, or it's just drawing the same general conclusions. Seriously though, it does a decent job, and you can discuss / ruminate over approaches.

I assume you can do all the same things in an editor. I'm just comfortable with a shell is all, and as a hardcore Vi user, I don't really want to use Visual Studio.
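For anyone wanting to try that kind of setup, here is a minimal sketch of a project-level .mcp.json for Claude Code; the zen command and args below are placeholders rather than the actual zen-mcp invocation, and the key is whatever OpenRouter issues you:

    {
      "mcpServers": {
        "zen": {
          "command": "uvx",
          "args": ["zen-mcp-server"],
          "env": { "OPENROUTER_API_KEY": "sk-or-..." }
        }
      }
    }

With something like that registered, asking Claude to "consult o3 via zen" goes through the MCP server's OpenRouter key rather than Anthropic's API.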

replies(2): >>44539185 #>>44539889 #
24. virgildotcodes ◴[] No.44537654[source]
Claude Code is a totally different paradigm. You don't edit your files directly, so there is no tab autocomplete. It's a chat session.

There are IDE integrations where you can run it in a terminal session while perusing the files through your IDE, but it's not powering any autocomplete there AFAIK.

replies(1): >>44537738 #
25. satvikpendem ◴[] No.44537711[source]
> What does Cursor/Windsurf offer over VS Code + CC?

Cursor's @Docs is still unparalleled and no MCP server for documentation fetching even comes close. That is the only reason I still use Cursor: sometimes I have esoteric packages that must be used in my code, and other IDEs will simply hallucinate because they lack such a robust docs feature (if they have one at all), which is useless to me. I believe Claude Code also falls into that bucket.

replies(3): >>44537915 #>>44538724 #>>44540186 #
26. conradkay ◴[] No.44537718{3}[source]
<https://forum.cursor.com/t/i-made-59-699-lines-of-agent-edit...>

It's quite interesting how little the Cursor power users use tab. Majority of the posts are some insane number of agent edits and close to (or exactly) 0 tabs.

replies(1): >>44538265 #
27. olejorgenb ◴[] No.44537724{3}[source]
Idk. When you're doing something it really gets, it's super nice, but it's also off a lot of the time, and IMO it's super distracting when it constantly pops up. There's no way to explicitly request it instead - other than toggling, which seems to also turn off context/edit tracking, because after toggling it back on it does not suggest anything until you make some edits.

While Zed's model is not as good the UI is so much better IMO.

28. asdev ◴[] No.44537738{3}[source]
are people viewing file diffs in the terminal? surely people aren't just vibing code changes in
replies(10): >>44537809 #>>44537826 #>>44537933 #>>44538081 #>>44538221 #>>44538222 #>>44538276 #>>44538344 #>>44538544 #>>44540408 #
29. conradkay ◴[] No.44537742{3}[source]
$900m in revenue is easy if you're selling a dollar for <$1. Feels like that's what cursor's $20/m "unlimited" plan is
30. zackify ◴[] No.44537749[source]
Almost all of this was true before they even announced the purchase. I was so shocked and now I’m not surprised it fell through
31. extr ◴[] No.44537765[source]
#1 improvement for VS Code users is giving the agent MCP tools to get diagnostics from the editor LSPs. Saves a tremendous amount of time having the agent run and rerun linting commands.
replies(1): >>44537974 #
32. gk1 ◴[] No.44537804{3}[source]
Plus it’s straightforward to make Claude Code run agents in parallel/background just like Codex and Cursor, in local sandboxes: https://github.com/dagger/container-use
33. asib ◴[] No.44537809{4}[source]
Yes or running claude code in the cursor/vscode terminal and watching the files change and then reviewing in IDE. I often like to be able to see an entire file when reviewing a diff, rather than just the lines that changed. Plus it's nice to have go-to-definition when reviewing.
34. sunnybeetroot ◴[] No.44537819[source]
I can roll back to different checkpoints with Cursor easily. Maybe CC has it but the fact that I haven’t found it after using it daily is an example of Cursor having a better UX for me.
replies(1): >>44538941 #
35. didibus ◴[] No.44537826{4}[source]
Yes, it shows you the file diff. But generally, the workflow is that you git commit a checkpoint, then let it make all the changes it wants freely, then in your IDE, review what has changed since previous commit, iterate the prompts/make your own adjustments to the code, and when you like it, git commit.
36. bionhoward ◴[] No.44537830[source]
does claude code have a privacy mode with zero data retention?
replies(1): >>44538330 #
37. davidclark ◴[] No.44537848[source]
Is this $900M ARR a reliable number?

Their base is $20/mth. That would equal 3.75M people paying a sub to Cursor.

If literally everyone is on their $200/mth plan, then that would be 375K paid users.

There’s 50M VS Code + VS users (May 2025). [1] 7% of all VS Code users having switched to Cursor does not match my personal circle of developers. 0.7% . . . Maybe? But, that would be if everyone using Cursor were paying $200/month.

Seems impossibly high, especially given the number of other AI subscription options as well.

[1] https://devblogs.microsoft.com/blog/celebrating-50-million-d...

replies(4): >>44537899 #>>44537986 #>>44538201 #>>44540386 #
38. lunarcave ◴[] No.44537853[source]
Strictly speaking about large, complex, sprawling codebases, I don't think you can beat the experience that an IDE + coding agent brings with a terminal-based coding agent.

The auto-regressive nature of these things means that errors accumulate, and an IDE is better placed to give that observability to the human than a coding agent is. I can course-correct more easily in an IDE, with clear diffs and code navigation, than by following a terminal timeline.

replies(3): >>44537902 #>>44538403 #>>44540170 #
39. teruakohatu ◴[] No.44537888[source]
> CC at like a 500% loss

Do you have a citation for this?

It might be at a loss, but I don’t think it is that extravagant.

replies(3): >>44537924 #>>44539073 #>>44539146 #
40. ashraymalhotra ◴[] No.44537899[source]
Maybe the OP got confused with Cursor's $900mil raise? https://cursor.com/blog/series-c

Last disclosed revenue from Cursor was $500mil. https://www.bloomberg.com/news/articles/2025-06-05/anysphere...

replies(2): >>44537909 #>>44538726 #
41. teruakohatu ◴[] No.44537902[source]
> I don't think you can beat the experience that an IDE + coding agent brings with a terminal-based coding agent.

CC has some integration with VSC it is not all or nothing.

replies(1): >>44540019 #
42. extr ◴[] No.44537909{3}[source]
Yeah that’s probably it!
replies(1): >>44540490 #
43. robryan ◴[] No.44537915[source]
Claude code can get pretty far simply calling `go doc` on packages.
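For example (assuming a Go toolchain on PATH; the local package path is hypothetical):

    go doc net/http              # package overview straight from the module cache
    go doc net/http.Client.Do    # signature and docs for a specific method
    go doc -all ./internal/foo   # full docs for a local package

No MCP server needed - the agent just shells out.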
44. resonious ◴[] No.44537924{3}[source]
I'm also curious about this. Claude Code feels very expensive to me, but at the same time I don't have much perspective (nothing to compare it to, really, other than Codex or other agent editors I guess. And CC is way better so likely worth the extra money anyway)
replies(1): >>44538007 #
45. Aeolun ◴[] No.44537928[source]
It probably doesn’t cost them all that much? Maybe they were offering the API at a 500% markup, and code is just breaking even.
46. golergka ◴[] No.44537933{4}[source]
that's what lazygit in another terminal tab is for
47. Aeolun ◴[] No.44537947[source]
> Anthropic just came out and yolo'd the context window because they could afford to

I don’t think this is true at all. The reason CC is so good is that they’re very deliberate about what goes in the context. CC often spends ages reading 5 LOC snippets, but afterwards it only has relevant stuff in context.

replies(3): >>44538012 #>>44538701 #>>44538937 #
48. xnx ◴[] No.44537964[source]
Is the case for using Claude Code much weaker now that Gemini CLI is out?
replies(1): >>44538038 #
49. mh- ◴[] No.44537974{3}[source]
This is a great point. Now I'm wondering if there's a way to get LSPs going with the terminal/TUI interface.
replies(1): >>44538031 #
50. smcleod ◴[] No.44537986[source]
The $20/month Cursor sub is heavily limited though; for basic casual usage that's fine, but you VERY soon run into its limits when working at any speed.
51. harikb ◴[] No.44538007{4}[source]
I think GP is talking about Claude Code Max 100 & 200 plans. They are very reasonable compared to anything else that has per-use token usage.

I am on Max and I can work 5 hrs+ a day easily. It does fall back to Sonnet pretty fast, but I don't seem to notice any big difference.

replies(2): >>44538048 #>>44538683 #
52. nsonha ◴[] No.44538012{3}[source]
Heard a lot of this context BS parroted all over HN; don't buy it. If simply increasing context size could solve the problem, Gemini would be the best model for everything.
replies(1): >>44538135 #
53. lozenge ◴[] No.44538025[source]
When the Copilot extension needs a new VS Code feature it gets added, but it isn't available to third party extensions until months later... Err, years later... well, whenever Microsoft feels like it.

So an extension will never be able to compete with Copilot.

replies(1): >>44538774 #
54. apwell23 ◴[] No.44538026[source]
That's not a fair comparison. CC is:

an agentic tool + Anthropic-subsidized pricing.

The second part is why it has "exploded".

55. nsonha ◴[] No.44538031{4}[source]
opencode has that
56. apwell23 ◴[] No.44538038[source]
No. CC is not just a CLI. It's the CLI + their Pro/Max plan.

Gemini CLI is very expensive.

replies(3): >>44538130 #>>44538418 #>>44542079 #
57. e1g ◴[] No.44538048{5}[source]
Yes, my CC usage is regularly $50-$100 per day, so their Max plan is absolutely great value that I don’t expect to last.
replies(4): >>44538359 #>>44538423 #>>44538898 #>>44542316 #
58. baby ◴[] No.44538053[source]
I've tried all the CLIs, and VS Code with agent mode (personally I prefer o4-mini) is the best thing out there.
59. socalgal2 ◴[] No.44538066[source]
just curious because I'm inexperienced with all the latest tools here

> - Tab completion model (Cursor's remaining moat)

What is that? I have Gemini Code Assist installed in VSCode and I'm getting tab completion. (yes, LLM based tab completion)

Which, as an aside I find useful when it works but also often extremely confusing to read. Like say in C++ I type

    int myVar = 123
The editor might show

    int myVar = 123;
And it's nearly impossible to tell that I didn't enter that `;`, so I move on to the next line instead of pressing tab, only to find the `;` wasn't really there. That's also probably an easy example. Literally, for about 1 in 6 lines I type, I can't tell what is actually in the file and what is being suggested. Any tips? Maybe I just need to set some special background color for the suggested text.

and PS: that tiny example is not an example of a great tab completion. A better one is when I start editing 1 of 10 similar lines, I edit the first one, it sees the pattern and auto does the other 9. Can also do the "type a comment and it fills in the code" thing. Just trying to be clear I'm getting LLM tab completion and not using Cursor
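If the suggestions come through VS Code's standard inline-suggest ghost text, one thing that can help is restyling it so it never blends in with typed code - a sketch for settings.json (the colors are just examples):

    {
      "workbench.colorCustomizations": {
        // make inline-suggestion ghost text visibly dimmer than real code
        "editorGhostText.foreground": "#7a7a7a99",
        "editorGhostText.background": "#2a2d2e"
      }
    }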

replies(2): >>44538320 #>>44538711 #
60. greymalik ◴[] No.44538073{3}[source]
As does Copilot
61. martinald ◴[] No.44538081{4}[source]
Depending on what I'm doing with it I have 3 modes:

Trivial/easy stuff - let it make a PR at the end and review in GitHub. It rarely gets this stuff wrong IME or does anything stupid.

Moderately complex stuff - let it code away, review/test it in my IDE and make any changes myself and tell claude what I've changed (and get it to do a quick review of my code)

Complex stuff - watch it like a hawk as it is thinking and interrupt it constantly asking questions/telling it what to do, then review in my IDE.

62. SamDc73 ◴[] No.44538130{3}[source]
They do have a subscription: it's $22/month, but the whole pricing and instructions are very confusing; it took me 15 min to figure it all out.
63. SamDc73 ◴[] No.44538135{4}[source]
Gemini tends to be better at bug hunting, but yes, at everything else Claude is still superior.
64. libraryofbabel ◴[] No.44538168{3}[source]
I don’t think that’s a viable strategy. It is very very hard and not many people can do it. Just look at how much Meta is paying to poach the few people in the world capable of training a next gen frontier model.
replies(1): >>44538310 #
65. helloericsf ◴[] No.44538201[source]
The base plan limit is not hard to hit. Then you're on the usage based rocket.
66. evan_ ◴[] No.44538221{4}[source]
If there’s a conflict you just back out your change and do it again.
67. tptacek ◴[] No.44538222{4}[source]
I review and modify changes in Zed or Emacs.
68. anonymid ◴[] No.44538259[source]
I never got the valuation. I (and many others) have built open source agent plugins that are pretty much just as good, in our free time (check out magenta nvim btw, I think it turned out neat!)
69. Jcampuzano2 ◴[] No.44538265{4}[source]
At my company we have an enterprise subscription and we're also all allowed to see the analytics for the entire company. Last I checked, I was literally the number one user of Tab and middle of the pack for agent.

It's interesting when I see videos or Reddit posts about Cursor and people getting rate limited and being super angry. In my experience tab is the number one feature, and I feel like most people using agent are probably overusing it on tasks that would honestly take less time to do themselves, or using models way smarter than they need to be for the task at hand.

70. HenriNext ◴[] No.44538272[source]
- Forking VSCode is very easy; you can do it in 1 hour.

- Anthropic doesn't use the inputs for training.

- Cursor doesn't have $900M ARR. That was the raise. Their ARR is ~$500m [1].

- Claude Code already supports the niceties, including "add selection to chat", accessing the IDE's realtime warnings and errors (built-in tool 'ideDiagnostics'), and using the IDE's native diff viewer for reviewing the edits.

[1] https://techcrunch.com/2025/06/05/cursors-anysphere-nabs-9-9...

replies(1): >>44538424 #
71. cedws ◴[] No.44538276{4}[source]
Apparently they are, which is crazy to me. Zed agent mode shows modified hunks and you can accept/reject them individually. I can't imagine doing it all through the CLI, it seems extremely primitive.
72. esafak ◴[] No.44538284{3}[source]
"Some AI stuff" can well be worth that.
73. lukan ◴[] No.44538310{4}[source]
Why are there actually only a few people in the world able to do this?

The basic concept is out there.

Lots of smart people studying hard to catch up to also be poached. No shortage of those I assume.

Good training data still seems the most important to me.

(and lots of hardware)

Or does the specific training still involve lots of smart decisions all the time?

And those small or big decisions make all the difference?

replies(6): >>44538372 #>>44538412 #>>44538433 #>>44539464 #>>44539549 #>>44540716 #
74. threecheese ◴[] No.44538316[source]
Claude Code is just proving that coding agents can be successful. The interface isn’t magic, it just fits the model and integrates with a system in all the right ways. The Anthropic team for that product is very small comparatively (their most prolific contributor is Claude), and I think it’s more of a technology proof than a core competency - it’s a great API $ business lever, but there’s no reason for them to try and win the “agentic coding UI” market. Unless Generative AI flops everywhere else, these markets will continue to emerge and need focus. The Windsurf kerfuffle is further proof that OpenAI doesn’t see the market as must-win for a frontier model shop.

And so I’d say this isn’t a harbinger of the death of Cursor, instead proof that there’s a future in the market they were just recently winning.

replies(2): >>44539652 #>>44542250 #
75. raincole ◴[] No.44538318{3}[source]
> to develop their own frontier coding model

Uh, the irony is that this is exactly what Windsurf tried.

replies(1): >>44538960 #
76. james_marks ◴[] No.44538320[source]
This feeling of, “what exactly is in the file?” is why I have all AI turned off in my IDE, and run CC independently.

I get all AI or none, so it’s always obvious what’s happening.

Completions are OK, but I did not enjoy the feeling of both us having a hand on the wheel and trying to type at the same time.

replies(1): >>44540107 #
77. james_marks ◴[] No.44538330[source]
Haven’t looked recently but when it came out, the story was that it was private by default. It uses a regular API token, which promises no retention.
78. james_marks ◴[] No.44538344{4}[source]
Yes. I manually read the diff of every proposed change and manually accept or deny.

I love CC, but letting it auto-write changes is, at best, a waste of time trying to find the bugs after they start compounding.

replies(1): >>44540671 #
79. jhickok ◴[] No.44538359{6}[source]
Can you give me an idea of how much interaction would be $50-$100 per day? Like are you pretty constantly in a back and forth with CC? And if you wouldn’t mind, any chance you can give me an idea of productivity gains pre/post LLM?
replies(3): >>44538451 #>>44538463 #>>44539883 #
80. ryanobjc ◴[] No.44538366[source]
Just cancelled my Cursor sub due to claude code, so heavily agree.
81. phillipcarter ◴[] No.44538372{5}[source]
I'd recommend reading some of the papers on what it takes to actually train a proper foundation model, such as the Llama 3 Herd of Models paper. It is a deeply sophisticated process.

Coding startups also try to fine-tune OSS models to their own ends. But this is also very difficult, and usually just done as a cost optimization, not as a way to get better functionality.

82. kmarc ◴[] No.44538384[source]
The forked IDE thing I don't understand either, but...

During the evaluation at a previous job, we found that Windsurf was waaaay better than anything else. They were expensive (to train on our source code directly), but the solution they offered outperformed the others.

83. nojs ◴[] No.44538403[source]
You can view and navigate the diffs made by the terminal agent in your IDE in realtime, just like Cursor, as well as commit, revert, etc. That’s really all the “integration” you need.
84. RestlessAPI ◴[] No.44538404[source]
I use Windsurf so I remain in the driver's seat. Using AI coding tools too much feels like brain rot where I can't think sharply anymore. Having auto complete guess my next edit as I'm typing is great because I still retain all the control over the code base. There's never any blocks of code that I can't be bothered to look at, because I wrote everything still.
85. sideshownz ◴[] No.44538412{5}[source]
1. Cost to hire is now prohibitive. You're competing against companies like Meta paying tens of millions for top talent.

2. Cost to train is also prohibitive. Grok's data centre has 200,000 H100 graphics cards. Impossible for a startup to compete with this.

replies(1): >>44540063 #
86. xnx ◴[] No.44538418{3}[source]
Isn't Gemini CLI 1000 requests/day free?

https://blog.google/technology/developers/introducing-gemini...

replies(2): >>44540161 #>>44540625 #
87. AJ007 ◴[] No.44538423{6}[source]
Pretty easy to hit $100 an hour using Opus on API credits. The model providers are heavily subsidized, the datacenters appear to be too. If you look at the Coreweave stuff and the private datacenters it starts looking like the telecom bubble. Even Meta is looking to finance datacenter expansion - https://www.reuters.com/business/meta-seeks-29-billion-priva...

The reason they are talking about building new nuclear power plants in the US isn't just for a few training runs, it's for inference. At scale the AI tools are going to be extremely expensive.

Also note China produces twice as much electricity as the United States. Software development and agent demand is going to be competitive across industries. You may think, oh I can just use a few hours of this a day and I got a week of work done (happens to me some days), but you are going to end up needing to match what your competitors are doing - not what you got comfortable with. This is the recurring trap of new technology (no capitalism required.)

There is a danger to independent developers becoming reliant on models. $100-$200 is a customer acquisition cost giveaway. The state of the art models probably will end up costing hourly what a human developer costs. There is also the speed and batching part. How willing is the developer to, for example, get 50% off but maybe wait twice as long for the output. Hopefully the good dev models end up only costing $1000-$2000 a month in a year. At least that will be more accessible.

Somewhere in the future these good models will run on device and just cost the price of your hardware. Will it be the AGI models? We will find out.

I wonder how this comment will age, will look back at it in 5 or 10 years.

replies(4): >>44539412 #>>44539537 #>>44539588 #>>44541752 #
88. edoceo ◴[] No.44538424[source]
The cost of the fork isn't creating it, it's maintaining it. But maybe AI could help :/
replies(1): >>44539534 #
89. libraryofbabel ◴[] No.44538433{5}[source]
The basic concept plus a lot of money spent on compute and training data gets you pretraining. After that to get a really good model there’s a lot more fine-tuning / RL steps that companies are pretty secretive about. That is where the “smart decisions” and knowledge gained by training previous generations of sota models comes in.

We’d probably see more companies training their own models if it was cheaper, for sure. Maybe some of them would do very well. But even having a lot of money to throw at this doesn’t guarantee success, e.g. Meta’s Llama 4 was a big disappointment.

That said, it’s not impossible to catch up to close to state-of-the-art, as Deepseek showed.

90. resonious ◴[] No.44538451{7}[source]
Re productivity gains, CC allows me to code during my commute time. Even on a crowded bus/train I can get real work done just with my phone.
replies(3): >>44538523 #>>44538534 #>>44539239 #
91. e1g ◴[] No.44538463{7}[source]
Yes, a lot of usage, I’d guess top 10% among my peers. I do 6-10hrs of constant iterating across mid-size codebases of 750k tokens. CC is set to use Opus by default, which further drives up costs.

Estimating productivity gains is a flame war I don’t want to start, but as a signal: if the CC Max plan goes up 10x in price, I’m still keeping my subscription.

I maintain top-tier subscription to every frontier service (~$1k/mo) and throughout the week spend multiple hours with each of Cursor, Amp, Augment, Windsurf, Codex CLI, Gemini CLI, but keep on defaulting to Claude Code.

replies(4): >>44538573 #>>44538575 #>>44538910 #>>44541859 #
92. brendoelfrendo ◴[] No.44538523{8}[source]
Unless you're getting paid for your commute, you're just giving your employer free productivity. I would recommend doing literally anything else with that time. Read a book, maybe.
replies(2): >>44538592 #>>44539940 #
93. ReaLNero ◴[] No.44538534{8}[source]
What's your workflow if I may ask? I've been interested in the idea as well.
replies(1): >>44538666 #
94. fooster ◴[] No.44538544{4}[source]
I just accept all and review in my editor.
95. firesteelrain ◴[] No.44538553[source]
Windsurf's big claim to fame was that you could run their model air-gapped, and they said they did not train on GPL code. This was an option available to Enterprise customers until they took it away recently to prevent self-hosting.
96. foolishgame ◴[] No.44538573{8}[source]
I am curious what kind of code development you are doing with so many subscriptions.

Are you doing front end, backend, full stack, or model development itself?

Are you distilling models to train your own?

I have never heard of someone using so many subscriptions.

Is this for your full-time job or a startup?

Why not use Qwen or DeepSeek and host it yourself?

I am impressed with what you are doing.

replies(1): >>44539445 #
97. jhickok ◴[] No.44538575{8}[source]
Thank you for your perspective. I’ve been staring at Claude Code for a bit and I think I will just pull the trigger.
replies(1): >>44539422 #
98. resonious ◴[] No.44538592{9}[source]
It's for a paid side gig.
99. bn-l ◴[] No.44538642[source]
It was so they could close source it.
replies(1): >>44540356 #
100. absurddoctor ◴[] No.44538658[source]
They quietly released an update to CC earlier today so it can now be run natively on Windows.
101. resonious ◴[] No.44538666{9}[source]
The project is just a web backend. I give Claude Code grunt work tasks. Things like "make X operation also return Y data" or "create Z new model + CRUD operations". Also asking it to implement well-known patterns like debouncing or caching for an existing operation works well.

My app builds and runs fine on Termux, so my CLAUDE.md says to always run unit tests after making changes. So I punch in a request, close my phone for a bit, then check back later and review the diff. Usually takes one or two follow-up asks to get right, but since it always builds and passes tests, I never get complete garbage back.

There are some tasks that I never give it. Most of that is just intuition. Anything I need to understand deeply or care about the implementation of I do myself. And the app was originally hand-built by me, which I think is important - I would not trust CC to design the entire thing from scratch. It's much easier to review changes when you understand the overall architecture deeply.
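For context, the CLAUDE.md instruction mentioned above is just a plain Markdown file of standing orders in the repo root; a minimal sketch (the test command here is hypothetical):

    # CLAUDE.md
    - This is a web backend; it builds and runs under Termux.
    - After any code change, run the unit tests (`make test`) and fix failures before finishing.
    - Keep diffs small and do not refactor unrelated code.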

102. ec109685 ◴[] No.44538681[source]
Good analysis. And Claude Code itself will be mercilessly copied, so even if another model jumps ahead, the switching cost is small.

That said, the creator of Claude Code jumped to Cursor so they must see a there there.

103. sothatsit ◴[] No.44538683{5}[source]
You can tell Claude Code to use opus using /model and then it doesn't fall back to Sonnet btw. I am on the $100 plan and I hit rate-limits every now and then, but not enough to warrant using Sonnet instead of Opus.
104. ec109685 ◴[] No.44538701{3}[source]
Background of how it works: https://kirshatrov.com/posts/claude-code-internals

Prompt: https://gist.github.com/transitive-bullshit/487c9cb52c75a970...

replies(2): >>44538752 #>>44539575 #
105. ec109685 ◴[] No.44538711[source]
Tab completion in cursor lets you keep hitting tab and it will jump to next logical spot in file to keep editing or completing from.
106. neoecos ◴[] No.44538717[source]
I think lots of issues with the integration of CC or other TUIs with graphical IDEs will be solved by stuff like the Agentic Coding Protocol that the guys at Zed are working on https://www.npmjs.com/package/@zed-industries/agentic-coding...
replies(1): >>44538739 #
107. bn-l ◴[] No.44538724[source]
> Cursor's @Docs is still unparalleled and no MCP server for documentation

I strongly disagree. It will put the wrong doc snippets into context 99% of the time. If the docs are slightly long then forget it, it’ll be even worse.

I never use it because of this.

replies(1): >>44538749 #
108. npinsker ◴[] No.44538726{3}[source]
It’s probably due to the top comment citing that number
109. bn-l ◴[] No.44538739{3}[source]
I trust zed to get it right over cursor with their continual enshittification.
110. satvikpendem ◴[] No.44538749{3}[source]
What packages do you use it for? I honestly never had that issue, it's very good in my use cases to find some specific function to call or to figure out some specific syntax.
111. RainyDayTmrw ◴[] No.44538752{4}[source]
I'm always surprised how short system prompts are. It makes me wonder where the rest of the app's behavior is encoded.
112. Maxious ◴[] No.44538774{3}[source]
As part of this whole drama, the APIs that Copilot uses are being opened up https://code.visualstudio.com/blogs/2025/06/30/openSourceAIE...
113. bilsbie ◴[] No.44538894[source]
Is Claude Code expensive? Can you control the costs, or can it surprise you?
replies(1): >>44538901 #
114. bilsbie ◴[] No.44538898{6}[source]
Is there a cheap version for hobbyists? Or what’s the best thing for hobbyists to use, just cut and paste?
replies(5): >>44538974 #>>44539014 #>>44539022 #>>44539380 #>>44541876 #
115. aaronbrethorst ◴[] No.44538901[source]
On a subscription, it is 100% predictable: $20, $100, or $200/month
116. jonstewart ◴[] No.44538910{8}[source]
I am curious what kind of development you’re doing and where your projects fall on the fast iteration<->correctness curve (no judgment). I’ve used CC Pro for a few weeks now and I will keep it, it’s fantastically useful for some things, but it has wasted more of my time than it saved when I’ve experimented with giving it harder tasks.
replies(1): >>44539957 #
117. anon7000 ◴[] No.44538937{3}[source]
I’ve definitely observed that CC is waaaay slower than cursor
118. chatmasta ◴[] No.44538939[source]
The Microsoft investments in both VSCode and GitHub are looking incredibly prescient.
119. handfuloflight ◴[] No.44538941{3}[source]
Or Cursor just gave him a better deal?
120. stogot ◴[] No.44538960{4}[source]
Why did they fail?
replies(1): >>44540071 #
121. hanklazard ◴[] No.44538974{7}[source]
Cursor at 20$/M is pretty great
122. taxborn ◴[] No.44539014{7}[source]
I've been enjoying Zed lately
replies(1): >>44539837 #
123. mrmincent ◴[] No.44539022{7}[source]
Claude Code pro is ~$20USD/ month and is nearly enough for someone like me who can’t use it at work and is just playing around with it after work. I’m loving it.
124. Abishek_Muthian ◴[] No.44539043[source]
> - Tab completion model (Cursor's remaining moat)

My local ollama + continue + Qwen 2.5 coder gives good tab completion with minimal latency; how much better is Cursor’s tab completion model?

I’m still wary of letting an LLM edit my code, so my local setup gives me sufficient assistance with tab completion and occasional chat.
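For reference, the tab-completion part of that setup is a few lines in Continue's config.json, assuming the model has already been pulled in Ollama (the model tag here is illustrative):

    {
      "tabAutocompleteModel": {
        "title": "Qwen 2.5 Coder (local)",
        "provider": "ollama",
        "model": "qwen2.5-coder:1.5b"
      }
    }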

replies(1): >>44542063 #
125. rolisz ◴[] No.44539073{3}[source]
Before they announced the Max plans, I could easily hit 10-15$ of API usage per day (without even being a heavy user).

Since they announced that you can use the Pro subscription with Claude Code, I've been using it much more and I've never ever been rate limited.

replies(2): >>44539179 #>>44539503 #
126. fipar ◴[] No.44539087{3}[source]
Just to offer a different perspective, I use Cursor at work and, coming from emacs (which I still use) with copilot completions only when I request them with a shortcut, Cursor’s behavior drives me crazy.
replies(1): >>44539476 #
127. csomar ◴[] No.44539146{3}[source]
The way I am doing the math, my Max subscription is still 5x cheaper than the same usage priced at DeepSeek API rates. So either DeepSeek is losing money (unlikely) or Anthropic is losing lots of money (more likely). Grok kinda confirms my suspicions: assuming DeepSeek prices, I've probably used north of $100 of Grok compute, and I didn't pay Grok or Twitter a single cent. $100 is a lot of loss for a single user.
replies(3): >>44540032 #>>44540348 #>>44541807 #
128. 3uler ◴[] No.44539179{4}[source]
This is what I don't get about the cost reported by Claude Code. At work I use it against our AWS Bedrock instance, and most sessions will say $15-20, and I'll have multiple agents running. So I can easily spend 60 bucks a day in reported cost. Yet our AWS Bedrock bill is only a small fraction of that? Why would you overcharge on direct usage of your API?
replies(1): >>44540747 #
129. mat_b ◴[] No.44539185{4}[source]
I also use vim heavily and I've found that I'm really enjoying Cursor + VS Code Vim extension. The cursor tab completion works very nicely in conjunction with vim navigate mode.
130. insane_dreamer ◴[] No.44539225{3}[source]
JetBrains has CC integration where CC runs in a terminal window but uses the IDE (i.e., Pycharm) for diffing. Works well.
131. dwohnitmok ◴[] No.44539239{8}[source]
How do you use Claude Code via your phone?
replies(1): >>44539542 #
132. osigurdson ◴[] No.44539254[source]
>> Claude Code has absolutely exploded

Does anyone have a comparison between this and OpenAI Codex? I find OpenAI's thing really good actually (vastly better workflow than Windsurf). Maybe I am missing out however.

replies(2): >>44540151 #>>44540303 #
133. TeMPOraL ◴[] No.44539380{7}[source]
Claude Code with a Claude subscription is the cheap version for current SOTA.

"Agentic" workflows burn through tokens like there's no tomorrow, and the new Opus model is so expensive per-token that the Max plan pays itself back in one or two days of moderate usage. When people reports their Claude Code sessions costing $100+ per day, I read that as the API price equivalent - it makes no sense to actually "pay as you go" with Claude right now.

This is arguably the cheapest option available on the market right now in terms of results per dollar, but only if you can afford the subscription itself. There's also time/value component here: on Max x5, it's quite easy to hit the usage limits of Opus (fortunately the limit is per 5 hours or so); Max x20 is only twice the price of Max x5 but gives you 4x more Opus; better model = less time spent fighting with and cleaning up after the AI. It's expensive to be poor, unfortunately.

replies(1): >>44541302 #
134. SV_BubbleTime ◴[] No.44539412{7}[source]
> Pretty easy to hit $100 an hour

I don’t see how that can be true, but if it is…

Either you or I are definitely using Claude Code incorrectly.

replies(2): >>44540261 #>>44542130 #
135. SV_BubbleTime ◴[] No.44539422{9}[source]
It’s a wild frontier, but as a recent convert to CC, I would say go for it.

It’s so stupid fast to get running that you aren’t out anything if you don’t like it.

There was no way I was going to switch to a different IDE.

136. milofeynman ◴[] No.44539424{3}[source]
Using Cline for a bit made me realize Cursor was doomed. Everything is just a GPT/Anthropic wrapper with fancy prompts.

I can do most of what I want with cline, and I've gone back from large changes to just small changes and been moving much quicker. Large refactors/changes start to deviate from what you actually want to accomplish unless you have written a dissertation, and even then they fail.

replies(1): >>44542190 #
137. e1g ◴[] No.44539445{9}[source]
I’m a founder/CTO of an enterprise SaaS, and I code everything from data modeling, to algos, backend integrations, frontend architecture, UI widgets, etc. All in TypeScript, which is perfectly suited to LLMs because we can fit the types and repo map into context without loading all code.

As to "why": I've been coding for 25 years, and LLMs are the first technology that has a non-linear impact on my output. It's simultaneously moronic and jaw-dropping. I'm good at what I do (eg, merged fixes into Node) and Claude/o3 regularly find material edge cases in code I was confident in. Then they add a test case (as per our style), write a fix, and update docs/examples within two minutes.

I love coding and the art&craft of software development. I’ve written millions of lines of revenue generating code, and made millions doing it. If someone forced me to stop using LLMs in my production process, I’d quit on the spot.

Why not self host: open source models are a generation behind SOTA. R1 is just not in the same league as the pro commercial models.

replies(4): >>44539609 #>>44539686 #>>44540495 #>>44541076 #
138. riwsky ◴[] No.44539464{5}[source]
Because it’s not about “who can do it”, it’s about “who can do it the best”.

It’s the difference between running a marathon (impressive) and winning a marathon (here’s a giant sponsorship check).

139. MkLouis ◴[] No.44539476{4}[source]
Which Emacs package do you use for Copilot? I tried using copilot.el a long while ago, but had problems with it. Is there something new, or does copilot.el fulfill your needs?
140. asaddhamani ◴[] No.44539503{4}[source]
API prices are way higher than actual inference cost.
141. brundolf ◴[] No.44539528[source]
I also just prefer CC's UX. I've tried to make myself use Copilot and Roo and I just couldn't. The extra mental overhead and UI context-switching took me out of the flow. And tab completion has never felt valuable to me.

But the chat UX is so simple it doesn't take up any extra brain-cycles. It's easier to alt-tab to and from; it feels like slacking a coworker. I can have one or more terminal windows open with agents I'm managing, and still monitor/intervene in my editor as they work. Fits much nicer with my brain, and accelerates my flow instead of disrupting it

There's something starkly different for me about not having to think about exactly what context to feed to the tool, which text to highlight or tabs to open, which predefined agent to select, which IDE button to press

Just formulate my concepts and intent and then express those in words. If I need to be more precise in my words then I will be, but I stay in a concepts + words headspace. That's very important for conserving my own mental context window

142. whatevaa ◴[] No.44539534{3}[source]
The cost of a VS Code fork is that Microsoft has restricted the extension marketplace for forks. You have to maintain a separate one; that is the real dealbreaker.
replies(2): >>44541272 #>>44541604 #
143. manmal ◴[] No.44539537{7}[source]
The SOTA models will always run in data centers, because they have 5x or more VRAM and 10-100x the compute allowance. Plus, they can make good use of scaling w/ batch inference which is a huge power savings, and which a single developer machine doesn’t make full use of.
144. manmal ◴[] No.44539542{9}[source]
vibetunnel.sh perhaps
145. seanhunter ◴[] No.44539549{5}[source]
Why are there so few people in the world able to run 100m in sub 10s?

The basic concept is out there: run very fast.

Lots of people running every day who could be poached. No shortage of those I assume.

Good running shoes still seem the most important to me.

replies(1): >>44540713 #
146. manmal ◴[] No.44539575{4}[source]
Also check out claude-trace, which injects fetch hooks to get at the data: https://github.com/badlogic/lemmy/tree/main/apps/claude-trac...
147. dostick ◴[] No.44539588{7}[source]
Why “no capitalism required”? Competition of this kind is only possible with capitalism.
replies(3): >>44539970 #>>44541035 #>>44541378 #
148. atonse ◴[] No.44539609{10}[source]
> If someone forced me to stop using LLMs in my production process, I’d quit on the spot.

Yup 100% agree. I’d rather try to convince them of the benefits than go back to what feels like an unnecessarily inefficient process of writing all code by hand again.

And I’ve got 25+ years of solid coding experience. Never going back.

149. NitpickLawyer ◴[] No.44539623[source]
> Wouldn't an extension suffice?

Not if you want custom UI. There are a lot of things you can do in extension land (continue, cline, roocode, kilocode, etc. are good examples) but there are some things you can't.

One thing I thought would be really cool to try is integrating it at the LSP level and using all that good stuff, but apparently the people trying (I think there was a company from .il trying) either went closed or didn't release anything noteworthy...

150. groggo ◴[] No.44539640{3}[source]
I haven't used Cursor or Claude much, how different is it from Copilot? I bounce between desktop ChatGPT (which can update VS Code) and copilot. Is there an impression that those have fallen behind?
151. extr ◴[] No.44539652[source]
I was being hyperbolic saying their ARR will go to zero. That's obviously not the case, but the point is that CC has revealed their real product was not "agentic coding UI", it was "insanely cheap tokens". I have no doubt they will continue to see success, but their future right now looks closer to being a competitor to free/open tools like Cline/Roo Code, as well as the CLI entrants, not a standalone $500M ARR juggernaut. They have no horse in the race in the token market; they're a middleman.

They either need to create their own model and compete on cost, or hope that token costs come down dramatically so as to be too cheap to meter.

152. ineedasername ◴[] No.44539686{10}[source]
When you say generation behind, can you give a sense of what that means in functionality per your current use? Slower/lower quality, it would take more iterations to get what you want?
153. notpushkin ◴[] No.44539837{8}[source]
Zed is fantastic. Just dipping my toes in agentic AI, but I was able to fix a failing test I spent maybe 15 minutes trying to untangle in a couple minutes with Zed. (It did proceed to break other tests in that file though, but I quickly reverted that.)

It is also BYOA or you can buy a subscription from Zed themselves and help them out. I currently use it with my free Copilot+ subscription (GitHub hands it out to pretty much any free/open source dev).

154. mekpro ◴[] No.44539883{7}[source]
You can easily reach $50 per day by force-switching the model to Opus (/model opus); it will continue to use Opus even though there is a warning about approaching the limit.

I found Opus is significantly more capable at coding than Sonnet, especially for tasks that are poorly defined; thinking mode can fill in a lot of missing detail, and you just need to edit a little before letting it code.

replies(1): >>44540731 #
155. coolspot ◴[] No.44539888{3}[source]
Github Copilot just added that about a week ago.
156. ◴[] No.44539889{4}[source]
157. princevegeta89 ◴[] No.44539892{3}[source]
I replaced it. My opinion: Cursor sucks as an IDE. Cursor may have average to above-average quality in IDE assistance - but the IDE seems to get in the way. Its entire performance depends on the real-time performance and latency of their servers, and sometimes it is way too slow. The TAB autocomplete that was working for you for the last 30 minutes suddenly doesn't work randomly, or experiences delays so severe that it stops making sense.

Besides that, the IDE seems poorly designed - some navigation options are confusing and it makes way too many intrusive changes (ex: automatically finishing strings).

I've since gone back to VS Code - with Cline (with OpenRouter and super cheap Qwen Coder models, Windsurf FREE, Claude Code with $20 per month) and I get great mileage from all of them.

158. positr0n ◴[] No.44539940{9}[source]
Everywhere I've worked as a programmer you're just paid to do your job. If you get some of it done on your commute what difference does it make?
159. brailsafe ◴[] No.44539957{9}[source]
It's interesting to work with a number of people using various models and interaction modes in slightly different capacities. I can see where the huge productivity gains are and can feel them, but the same is true for the opposite. I'm pretty sure I lost a full day or more trying to track down a build error because it was relatively trivial for someone to ask CC or something to refactor a ton of files, which it seems to have done a bit too eagerly. On the other hand, that refactor would have been super tedious, so maybe worth it?
160. tsimionescu ◴[] No.44539970{8}[source]
Not really, it's possible with any market economy, even a hypothetical socialist one (that is, one where all market actors are worker-owned co-ops).

And, since there is no global super-state, the world economy is a market economy, so even if every state were a state-owned planned economy, North Korea style, still there would exist this type of competition between states.

replies(1): >>44540048 #
161. jdkoeck ◴[] No.44540019{3}[source]
Honestly, I think the Claude Code integration in VS Code is very close to the « nothing » part of the spectrum!
162. tonyhart7 ◴[] No.44540032{4}[source]
What?? Sonnet/Opus is way better than DeepSeek; how can you compare that to DeepSeek?

Also, you're probably talking about a distilled DeepSeek model.

replies(1): >>44540528 #
163. 0xDEAFBEAD ◴[] No.44540048{9}[source]
I mean, if you wanna get technical, many companies in Silicon Valley are worker-owned (equity compensation)
replies(1): >>44540217 #
164. tonyhart7 ◴[] No.44540063{6}[source]
"Impossible for a startup to compete with this."

it's funny to me since xAI is literally the "youngest" in this space and recently made Grok 4, which surpasses all frontier models

it's literally not impossible

replies(2): >>44540242 #>>44540277 #
165. jonny_eh ◴[] No.44540071{5}[source]
It's both hard AND expensive.
166. acka ◴[] No.44540107{3}[source]
It gets even worse when all three of IntelliSense, AI completion, and the human are all vying for control of the input. This can be very frustrating at times.
167. sunaookami ◴[] No.44540151[source]
Codex CLI is very bad, it often struggles to even find the file and goes on a rampage inside the home directory trying to find the file and commenting on random folders. Using o3/o4-mini in Aider is decent though.
replies(1): >>44542265 #
168. fnordpiglet ◴[] No.44540158[source]
I use Augment extensively and find it superior to Cursor in every way - and it operates as an extension. It has a really handy task-planning interface and a meta prompt refinement feature, and the costs are remarkably low. The quality of the output implementation is higher IMO, and I don't have to do a lot of model selection and don't get Max-model bill explosions. If there's something Cursor provided that Augment doesn't via extension, it was not functionally useful enough to notice.
replies(1): >>44540405 #
169. sunaookami ◴[] No.44540161{4}[source]
It's false advertising - 1000 model requests, not 1000 Gemini 2.5 Pro requests. It drops to Flash after 3-5 requests and Flash is useless.
170. petesergeant ◴[] No.44540170[source]
> I don't think you can beat the experience that an IDE + coding agent brings with a terminal-based coding agent.

I resisted moving from Roo in VS Code to CC for this reason, and then tried it for a day, and didn't go back.

171. N3cr0ph4g1st ◴[] No.44540186[source]
Context7 mcp
replies(1): >>44541587 #
172. tsimionescu ◴[] No.44540217{10}[source]
They are not worker owned, they have some small amount of worker ownership. But the majority of stock is never owned by workers, other than the CEO.
replies(1): >>44540412 #
173. lukan ◴[] No.44540242{7}[source]
I mean, that's a startup backed by the richest man in the world who also was engaged with OpenAI in the beginning.

I assume startup here means the average one, that has a little bit less of funding and connections.

replies(1): >>44540382 #
174. anonzzzies ◴[] No.44540250[source]
I think CC is just far more useful; I use it for literally everything and without MCP (except Puppeteer sometimes), as it just writes Python/bash scripts to do that far better than all those hacked-together MCP garbage bins. It controls my computer & writes code. It made me better as well: now I actually write code, including GUI/web apps, that is always fully scriptable. It helps me, but it definitely helps CC; it can just interrogate/test everything I make without Puppeteer (or other web browser control, which is always brittle as hell).
175. shinycode ◴[] No.44540261{8}[source]
It’s definitely easy. With an API key I hit $200 in an evening; I didn’t think that could be possible. Horrifying.
replies(1): >>44540637 #
176. ako ◴[] No.44540277{7}[source]
Most startups don't have Elon Musk's money.
177. coolKid721 ◴[] No.44540303[source]
never met anyone who used codex lol
178. moltar ◴[] No.44540304[source]
And open source tools like aider are, of course, even more validated and get more eyes.

Plus the recently launched OpenCode, an open-source CC, is gaining traction fast.

There was always very little moat in the model wrapper.

The main value of CC is that it's a free tool built by people who understand all the internals of their own models.

179. josephcooney ◴[] No.44540321{3}[source]
Interestingly, Windsurf has done this (I'm not sure how frontier this model is... but it's their own model) - https://windsurf.com/blog/windsurf-wave-9-swe-1 - but AFAIK Cursor has not.
180. selvan ◴[] No.44540339[source]
Cursor - copilot/AI pair-programming use cases.

Claude Code - agentic/autonomous coding use cases.

Both have their own place in programming, though there are overlaps.

181. ◴[] No.44540348{4}[source]
182. justincormack ◴[] No.44540356{3}[source]
You can ship closed-source extensions.
183. tonyhart7 ◴[] No.44540382{8}[source]
So are Meta (FB) and Apple, but that doesn't seem to be the case.

Money is a "less" important factor. I'm not saying it doesn't matter, but it matters much less than you would think.

184. sumedh ◴[] No.44540386[source]
Enterprises pay more.
185. atombender ◴[] No.44540405{3}[source]
I think Augment has been flying under the radar for many people, and really deserves better marketing.

I've been using Augment for over a year with IntelliJ, and never understood why my colleagues were all raving about Cursor and Windsurf. I gave Cursor a real try, but it wasn't any better, and the value proposition of having to adopt a dedicated IDE wasn't attractive to me.

A plugin to leverage your existing tools makes a lot more sense than an IDE. Or at least until/if AI agents get so smart that you don't need most of the IDE's functionality, which might change what kinds of tooling are needed when you're in the passenger seat rather than the driver's seat.

186. sumedh ◴[] No.44540408{4}[source]
I use windsurf to check the diff from Claude Code.
187. khurs ◴[] No.44540409[source]
>What does Cursor/Windsurf offer over VS Code + CC?

A lot of devs are not superstar devs.

They don't want a terminal tool, or anything they have to configure.

An IDE you can just download and that 'just works' has value. And there are companies that will pay.

replies(3): >>44540433 #>>44540959 #>>44541530 #
188. 0xDEAFBEAD ◴[] No.44540412{11}[source]
Consider also that VC funds often have pension funds as their limited partners. Workers have a claim to their pension, and thus a claim to the startup returns that the VC invests in.

So yeah it basically comes down to your definition of "worker-owned". What fraction of worker ownership is necessary? Do C-level execs count as workers? Can it be "worker-owned" if the "workers" are people working elsewhere?

Beyond the "worker-owned" terminology, why is this distinction supposed to matter exactly? Supposing there was an SV startup that was relatively generous with equity compensation, so over 50% of equity is owned by non-C-level employees. What would you expect to change, if anything, if that threshold was passed?

189. dukeyukey ◴[] No.44540433[source]
CC _is_ that tool: npm install, log in, give tasks. Diffs automatically appear in your IDE (in VS Code/IntelliJ at least).
190. teiferer ◴[] No.44540490{4}[source]
That's the same order of magnitude though.
191. sebastianz ◴[] No.44540495{10}[source]
> data modeling, to algos, backend integrations, frontend architecture, UI widgets, etc. All in TypeScript, which is perfectly suited to LLMs because we can fit the types and repo map into context without loading all code.

Which frameworks & libraries have you found work well in this (agentic) context? I feel much of the JS library landscape does not do enough to enforce an easily understood project structure that would "constrain" the architecture and force modularity. (I might have this bias from my many years of work with Rails, which is highly opinionated in this regard.)

192. nurettin ◴[] No.44540528{5}[source]
I haven't tried DeepSeek, but I've seen Claude do crazy things if you are at the correct random.seed.
193. threatripper ◴[] No.44540530[source]
But Cursor is also offering OpenAI and Google models.
194. upcoming-sesame ◴[] No.44540625{4}[source]
Theoretically, yes; in practice it is riddled with rate-limiting issues:

https://github.com/google-gemini/gemini-cli/issues/1502

195. DANmode ◴[] No.44540637{9}[source]
To be clear, this is a lot of full-scale reading and (re)writing, without any rules, prompts, or "agents"/code to limit your resource usage, right?

Nobody's asking for $200 in single-line diffs in less than a day - right?

replies(2): >>44540680 #>>44541387 #
196. upcoming-sesame ◴[] No.44540671{5}[source]
It seems like CC is king at the moment, from what I read.

I currently have a Copilot subscription that includes 4.1 for free, plus Sonnet 4 and Gemini 2.5 Pro with monthly limits. Thinking of switching to CC.

I am curious which Claude Code subscription most people are using...?

197. shinycode ◴[] No.44540680{10}[source]
It’s not about a single-line diff, but the same prompts with Cursor do not end up costing that much.
198. ◴[] No.44540713{6}[source]
199. vachina ◴[] No.44540716{5}[source]
You need a person who can hit the ground running. Compute for LLMs is extremely capital-intensive and you’re always racing against time. Missing performance targets can mean life or death for the company.
200. upcoming-sesame ◴[] No.44540731{8}[source]
Wow, haven't tried Opus, but Sonnet 4 is already damn good.
201. mike_hearn ◴[] No.44540747{5}[source]
Anthropic has costs beyond their AWS bill ....
202. loandbehold ◴[] No.44540959[source]
You don't need to be a superstar dev to use CC. If you can use a chat window, you can use CC.
203. firecall ◴[] No.44541020[source]
Unless I’m understanding it wrong, the Tab Completion in Cursor isn’t a moat anymore.

VS Code & Copilot now offer it.

Is it as good? Maybe not.

But they are really working hard over there at Copilot and seem to be catching up.

I get an Edu license for Copilot, so just ditched Cursor!

replies(1): >>44541084 #
204. pembrook ◴[] No.44541035{8}[source]
Have you been human before? Competition for resources and status is an instinctive trait.

It rears its head regardless of what sociopolitical environment you place us in.

You’re either competing to offer better products or services to customers…or you’re competing for your position in the breadline or politburo via black markets.

205. throwaway2037 ◴[] No.44541076{10}[source]

    > I’ve written millions of lines of revenue generating code
This is a wild claim.

Approx 250 working days in a year. 25 years coding. Just one million lines would be phenom output, at 160 lines per day forever. Now you are claiming multiple millions? Come on.

replies(2): >>44541322 #>>44541390 #
206. andrewingram ◴[] No.44541084[source]
I agree it has a good chance of catching up, but the difference in quality is pretty noticeable today. I'd much rather stick with VS Code, because I hate all the subtle ways Cursor changes the UI, like taking over the keyboard shortcut for clearing the scrollback in the terminal. But I find it's pretty hard to use Copilot's tab completion after using Cursor for a while.
207. redhale ◴[] No.44541176[source]
Cursor's multi-file tab completion and multi-file diff experience are worth $20 easily IMO.

I truly do not understand people's affinity for a CLI interface for coding agents. Scriptability I understand, but surely we could agree that CC with Cursor's UX would be superior to CC's terminal alone, right? That's why CC is pushing IDE integration -- they're just not there yet.

208. blackoil ◴[] No.44541272{4}[source]
Eclipse maintains a public repo.
209. leptons ◴[] No.44541302{8}[source]
>less time spent fighting with and cleaning up after the AI.

I've yet to use anything but Copilot in VS Code, which is half the time helpful and half the time wasting my time. For me it's almost break-even, if I don't count the frustration it causes.

I've been reading all these AI-related comment sections and none of it is convincing me there is really anything better out there. AI seems like break-even at best, but usually it's just "fighting with and cleaning up after the AI", and I'm really not interested in doing any of that. I was a lot happier when I wasn't constantly being shown bad code that I need to read and decide about, when I'm perfectly capable of writing the code myself without the hassle of AI getting in my way.

AI burnout is probably already a thing, and I'm close to that point. I do not have hope that it will get much better than it is, as the core of the tech is essentially just a guessing game.

replies(1): >>44542186 #
210. codedokode ◴[] No.44541322{11}[source]
100-200 lines per day, written, debugged, tested, and deployed, is normal performance, isn't it? I think I could do it if I worked for 8 hours.
211. AJ007 ◴[] No.44541378{8}[source]
Unfortunately it's called war and it appears to be part of human nature.
212. AJ007 ◴[] No.44541387{10}[source]
Right, this is from having Claude Code just running as an agent doing a lot of stuff. Also, tool use is a big context hog here.
213. ohdeargodno ◴[] No.44541390{11}[source]
Uh... totaling +1,000 lines at the end of a work week is an easy thing to do, especially if you're working on a new/evolving product.
214. adidoit ◴[] No.44541463[source]
Not sure this is true. Inference margins are substantial, and if you look at your Claude Code usage it's very clever at caching:

      Input │      Output │  Cache Create │      Cache Read
    916,134 │  11,106,507 │   199,684,538 │   2,767,614,506

As an example, here's my usage - massive daily usage for the past two months.
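
Back-of-envelope on why those cache reads cost so little (the prices here are assumed Sonnet-class list prices, roughly $3 per 1M input tokens with cache reads at ~10% of that - assumptions for illustration, not figures from the thread):

  # Rough cost of the "Cache Read" column above, under assumed list prices.
  input_price = 3.00 / 1_000_000       # $ per uncached input token (assumption)
  cache_read_price = 0.30 / 1_000_000  # $ per cache-read token (assumption)
  cache_read_tokens = 2_767_614_506
  print(f"${cache_read_tokens * cache_read_price:,.0f}")  # ~$830 billed as cache reads
  print(f"${cache_read_tokens * input_price:,.0f}")       # ~$8,303 if billed as normal input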
215. old_man_cato ◴[] No.44541530[source]
A lot of engineers underestimate the learning curve required to jump from IDE to terminal. Multiple generations of engineers were raised on IDEs. It's really hard to break that mental model.
216. iwontberude ◴[] No.44541551[source]
I don’t see how there will be any money to be made in this industry once these models are quantized and all local. It’s going to be one of the most painful bubble deflations we have ever seen and the biggest success of open source in our lifetimes.
217. dboreham ◴[] No.44541569[source]
Conversely Cursor is still broken on WSL2.
218. satvikpendem ◴[] No.44541587{3}[source]
Tried it, doesn't work that great
219. notpushkin ◴[] No.44541604{4}[source]
https://open-vsx.org/
220. mark_l_watson ◴[] No.44541752{7}[source]
Your excellent comments make me grateful that I am retired and just work part time on my own research and learning. I believe you when you say professional developers will need large inference compute budgets.

Probably because I am an old man, but I don’t personally vibe with full time AI assistant use, rather I will use the best models available for brief periods on specific problems.

Ironically, when I do use the best models available to me it is almost always to work on making weaker and smaller models running on Ollama more effective for my interests.

BTW, I have used neural network tech in production since 1985, and I am thrilled by the rate of progress, but worry about such externalities as energy use, environmental factors, and hurting the job market for many young people.

replies(1): >>44542141 #
221. manojlds ◴[] No.44541786[source]
Since then OpenAI has released Codex as well (the web one)
222. manojlds ◴[] No.44541798[source]
If open models become big, open coding agents would be bigger at that point. Even more motivation as well.
223. manojlds ◴[] No.44541807{4}[source]
Comparison should be with Claude API pricing. It doesn't matter what other models cost.
224. mark_l_watson ◴[] No.44541859{8}[source]
Mostly to save money (I am retired), I use the Gemini APIs. I used to also use good open-weight models on groq.com, but life is simpler just using Gemini.

Ultimately, my not using the best tools for my personal research projects has zero effect on the world but I am still very curious what elite developers with the best tools can accomplish, and what capability I am ‘leaving on the table.’

225. lvl155 ◴[] No.44541868[source]
Which is interesting because Sonnet is cheap and Opus is not on par with o3 for tasks where you want to deploy it.
226. mark_l_watson ◴[] No.44541876{7}[source]
If you are a hobbyist, just use Google’s gemini-cli (currently free!) on a half dozen projects to get experience.
227. mark_l_watson ◴[] No.44542063[source]
I often use the same setup. Qwen 2.5 Coder is very good on its own, but my Emacs setup doesn’t also use web search when that would be appropriate. I have separately been experimenting with the Perplexity Sonar APIs, which combine models and search, but I don’t have that integrated with my Emacs and Qwen setup - and that automatic integration would be very difficult to do well! If I could ‘automatically’ use a local Qwen, or other model, and fall back to a paid service like Perplexity or the Gemini grounding APIs just when needed, that would be fine indeed.

I am thinking about a new setup as I write this: in Emacs, I explicitly choose a local Ollama model or a paid API like Gemini or OpenAI, so I should just make calling Perplexity Sonar APIs another manual choice. (Currently I only use Perplexity from Python scripts.)

If I owned a company, I would frequently evaluate privacy and security aspects of using commercial APIs. Using Ollama solves that.
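
A minimal sketch of that kind of manual fallback (assuming Ollama's local HTTP API and Perplexity's OpenAI-compatible chat endpoint; the model names, helper names, and the needs_search flag are placeholders for illustration, not a real setup):

  # Local Qwen via Ollama by default; Perplexity Sonar only when search/grounding is wanted.
  import os
  import requests

  def ask_local(prompt: str) -> str:
      # Ollama's local chat endpoint; the model must already be pulled.
      r = requests.post("http://localhost:11434/api/chat", json={
          "model": "qwen2.5-coder",
          "messages": [{"role": "user", "content": prompt}],
          "stream": False,
      }, timeout=300)
      r.raise_for_status()
      return r.json()["message"]["content"]

  def ask_perplexity(prompt: str) -> str:
      # Perplexity's OpenAI-compatible chat completions endpoint with a Sonar model.
      r = requests.post(
          "https://api.perplexity.ai/chat/completions",
          headers={"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"},
          json={"model": "sonar",
                "messages": [{"role": "user", "content": prompt}]},
          timeout=300,
      )
      r.raise_for_status()
      return r.json()["choices"][0]["message"]["content"]

  def ask(prompt: str, needs_search: bool = False) -> str:
      # Manual choice, as described above: flip the flag when web search is needed.
      return ask_perplexity(prompt) if needs_search else ask_local(prompt)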

228. mark_l_watson ◴[] No.44542079{3}[source]
Wait a minute, have you often run out of the gemini cli free daily quota? Their free quota is very generous because they are trying to get market/mind share.
replies(1): >>44542147 #
229. macrolime ◴[] No.44542130{8}[source]
This is around what Cursor was costing me with Claude 4 Opus before I switched to Claude Code. Sonnet works fine for some things, but for some projects it spews unusable garbage, unless the specification is so detailed that it's almost the implementation already.
replies(1): >>44542399 #
230. AJ007 ◴[] No.44542141{8}[source]
I've been around for a while (not quite retirement age), and this is the closest I've come to the feeling I had using the internet and web in the early days. There are simultaneously infinite possibilities but also great uncertainty about what pathways will be taken and how things will end up.

There are a lot of parts in the near term to dislike here, especially the consequences for privacy, adtech, and energy use. I do have concerns that the greatest pitfalls in the short term are being ignored while other uncertainties are being exaggerated. (I've been warning about deep learning model use in recommendation engines for years, and only a sliver of people seem to have picked up on that one, for example.)

On the other hand, if good enough models can run locally, humans can end up with a lot more autonomy and choice with their software and operating systems than they have today. The most powerful models might run on supercomputers and just be solving the really big science problems. There is a lot of fantastic software out there that does not improve by throwing infinite resources at it.

Another consideration is that while the big tech firms are spending (what will likely approach) hundreds of billions of dollars in a race to "AGI", what matters to those same companies even more than winning is making sure the result isn't winner-takes-all. In that case, hopefully the outcome looks more like open source.

231. apwell23 ◴[] No.44542147{4}[source]
It switches to Flash almost immediately, like in 10-15 minutes. Flash sucks.

And even the switching is not smooth: for me, when the switch happens it just gets stuck sitting there, so I have to restart the CLI.

232. dgacmu ◴[] No.44542186{9}[source]
I tend to agree except for one recent experience: I built a quick prototype of an application whose backend I had written twice before and finally wanted to do right. But the existing infrastructure for it had bit-rotted, and I am definitely not a UI person. Every time I dive into html+js I have to spend hours updating my years-out-of-date knowledge of how to do things.

So I vibe coded it. I was extremely specific about how the back end should operate and pretty vague about the UI, and basically everything worked.

But there were a few things about this one: first, it was just a prototype. I wanted to kick around some ideas quickly, and I didn't care at all about code quality. Second, I already knew exactly how to do the hard parts in the back end, so part of the prompt input was the architecture and mechanism that I wanted.

But it spat out that html app way way faster than I could have.

233. mehphp ◴[] No.44542190{4}[source]
I agree with all you’ve said, but with regard to writing a dissertation for larger changes: have you tried letting it first write a plan for you as markdown (just keep this file uncommitted) and then letting it build a checklist of things to do?

I find just referencing this file over and over works wonders and it respects items that were already checked off really well.

I can get a lot done really fast this way, in small enough chunks that I know every bit of code and how it works (tweaking manually of course where needed).

But I can blow through some tickets way faster than before this way.
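
For example, the uncommitted plan file might look something like this (purely illustrative; the feature and task names are made up):

  # Plan: add CSV export to the reports page
  - [x] Sketch the data flow and pick an export approach
  - [ ] Add an ExportService with a to_csv() helper
  - [ ] Wire it into the reports controller
  - [ ] Unit tests: empty dataset, large dataset, odd encodings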

234. hv23 ◴[] No.44542250[source]
Digging in here more... why would you say it isn't in Anthropic's interest to win the "agentic coding UI" market?

My mental model is that these foundation model companies will need to invest in and win in a significant number of the app layer markets in order to realize enough revenue to drive returns. And if coding / agentic coding is one of the top X use cases for tokens at the app layer, seems logical that they'd want to be a winner in this market.

Is your view that these companies will be content to win at the model layer and be agnostic as to the app layer?

235. osigurdson ◴[] No.44542265{3}[source]
It isn't a CLI thing; it is available in the ChatGPT UI. I've been using it for a few weeks.
236. mnky9800n ◴[] No.44542316{6}[source]
Shhh don’t say that. I love Max. I don’t want it to go anywhere.
237. SV_BubbleTime ◴[] No.44542399{9}[source]
> unless the specification is so detailed that it's almost the implementation already.

You mean… it’s almost exactly like working with interns and jr developers? ;)