
883 points by rcchen | 1 comment
extr No.44537358
IMO, other than the Microsoft IP issue, the biggest thing that has shifted since this acquisition was first in the works is that Claude Code has absolutely exploded. Forking an IDE, with all the expense that comes with that, feels like a waste of effort considering the number of free/open-source CLI agentic tools that are out there.

Let's review the current state of things:

- Terminal CLI agents are several orders of magnitude less $$$ to develop than forking an entire IDE.

- CC is dead simple to onboard (use whatever IDE you're using now, with a simple extension for some UX improvements).

- Anthropic is free to aggressively undercut their own API margins (and middlemen like Cursor) in exchange for more predictable subscription revenue + training data access.

What does Cursor/Windsurf offer over VS Code + CC?

- Tab completion model (Cursor's remaining moat)

- Some UI niceties like "add selection to chat," etc.

Personally I think this is a harbinger of where things are going. Cursor was fastest to $900M ARR and IMO will be fastest back down again.

replies(41)
Abishek_Muthian No.44539043
> - Tab completion model (Cursor's remaining moat)

My local Ollama + Continue + Qwen 2.5 Coder setup gives good tab completion with minimal latency; how much better is Cursor’s tab-completion model?
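For anyone curious what this kind of setup looks like: Continue's classic config.json lets you point tab autocomplete at a local Ollama model. A minimal sketch (assuming you've already pulled a Qwen 2.5 Coder variant with `ollama pull` — the exact model tag is up to you):

```json
{
  "tabAutocompleteModel": {
    "title": "Qwen 2.5 Coder",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b"
  }
}
```

Smaller variants (1.5b/3b) tend to keep completion latency low on local hardware, which matters more for tab completion than raw quality.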

I’m still wary of letting an LLM edit my code, so my local setup gives me sufficient assistance with tab completion and occasional chat.

replies(1)
1. mark_l_watson No.44542063
I often use the same setup. Qwen 2.5 Coder is very good on its own, but my Emacs setup doesn’t also use web search when that would be appropriate. I have separately been experimenting with the Perplexity Sonar APIs, which combine models and search, but I don’t have that integrated with my Emacs and Qwen setup — and that automatic integration would be very difficult to do well! If I could ‘automatically’ use a local Qwen, or another model, and fall back to a paid service like Perplexity or the Gemini grounding APIs just when needed, that would be fine indeed.
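The hard part of that automatic fallback is deciding *when* a prompt needs fresh web information. A minimal sketch of the routing decision — everything here (the backend names, the keyword heuristic) is illustrative, not the commenter's actual Emacs setup:

```python
import re

# Crude illustrative heuristic: prompts about recent or time-sensitive
# facts get escalated to a search-grounded API; everything else stays local.
SEARCH_HINTS = re.compile(
    r"\b(latest|today|news|current|version|release[ds]?)\b", re.IGNORECASE
)

def needs_web_search(prompt: str) -> bool:
    """Return True if the prompt looks like it needs up-to-date web data."""
    return bool(SEARCH_HINTS.search(prompt))

def route(prompt: str) -> str:
    """Pick a backend: a hypothetical local Ollama model, or a paid
    search-grounded API (e.g. Perplexity Sonar) as the fallback."""
    if needs_web_search(prompt):
        return "perplexity-sonar"
    return "ollama/qwen2.5-coder"
```

In practice a keyword heuristic like this is brittle — which is presumably why doing the integration "well" is hard; a more robust version might ask the local model itself whether it needs search before answering.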

I am thinking about a new setup as I write this: in Emacs I already explicitly choose a local Ollama model or a paid API like Gemini or OpenAI, so I should just make calling the Perplexity Sonar APIs another manual choice. (Currently I only use Perplexity from Python scripts.)

If I owned a company, I would frequently evaluate the privacy and security aspects of using commercial APIs. Using Ollama locally solves that.