
469 points samuelstros | 10 comments
brokegrammer ◴[] No.45001678[source]
I don't get it. The title says "What makes Claude Code so damn good", which implies that they will show how Claude Code is better than other tools, or just better in general. But they go about repeating the Claude Code documentation using different wording.

Am I missing something here? Or is this just Anthropic shilling?

replies(5): >>45001719 #>>45001947 #>>45003495 #>>45003706 #>>45009343 #
nuwandavek ◴[] No.45001719[source]
(blogpost author here) Haha, that's totally fair. I've read a whole bunch of posts comparing CC to other tools, or with a dump of the architecture. This post was mainly for people who've used CC extensively, know for a fact that it is better, and wonder how to ship such an experience in their own apps.
replies(1): >>45001798 #
1. brokegrammer ◴[] No.45001798[source]
I've used Claude Code, Cursor, and Copilot in VS Code, and I don't "know" that Claude Code is better, apart from the fact that it runs in the terminal, which makes it a little faster but less ergonomic than tools running inside the editor. All of the context tricks can be done with Copilot instructions as well, so I simply can't see how Claude Code is superior.
replies(2): >>45001995 #>>45004241 #
2. techwiz137 ◴[] No.45001995[source]
For code generation, nothing so far beats Opus. More often than not it generates working code and fixes bugs that Gemini 2.5 Pro, or even Gemini Code Assist, couldn't solve. Gemini Code Assist is better than 2.5 Pro, but has far stricter limits per prompt and often truncates output.
replies(5): >>45002136 #>>45002196 #>>45002217 #>>45002674 #>>45003344 #
3. baq ◴[] No.45002136[source]
I found Anthropic’s models untrustworthy with SQL (e.g. they confused AND and OR operator precedence, or simply forgot to add parens, multiple times), while Gemini 2.5 Pro has no such issues and correctly identified Claude’s mistakes.
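(For reference, the precedence trap described above is standard SQL behavior: AND binds tighter than OR, so a WHERE clause without parens can filter very differently than intended. A minimal illustration using Python's sqlite3; the table and values are made up:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER, c INTEGER)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [(1, 0, 0), (0, 2, 0), (0, 2, 3)],
)

# Without parens: AND binds tighter, so this means  a = 1 OR (b = 2 AND c = 3)
rows = conn.execute(
    "SELECT a, b, c FROM t WHERE a = 1 OR b = 2 AND c = 3"
).fetchall()
# matches (1,0,0) via a=1 and (0,2,3) via b=2 AND c=3, but not (0,2,0)

# With explicit parens: (a = 1 OR b = 2) AND c = 3
explicit = conn.execute(
    "SELECT a, b, c FROM t WHERE (a = 1 OR b = 2) AND c = 3"
).fetchall()
# matches only (0,2,3)
```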
4. jonasft ◴[] No.45002196[source]
Let’s say that’s correct; you can still just use Opus in Cursor or whatever.
5. rendx ◴[] No.45002217[source]
The article is not comparing models, but how the models are used by tools, in this case Claude Code. It's not merely a thin wrapper around an API.
6. faangguyindia ◴[] No.45002674[source]
For me, Gemini 2.5 Pro with thinking tokens enabled blows Opus out of the water on "difficult problems".
7. d4rkp4ttern ◴[] No.45003344[source]
Don’t sleep on Codex-CLI + gpt-5. While the Codex-CLI scaffolding is far behind CC, the gpt-5 code seems solid from what I’ve seen (you can adjust the thinking level using /model).
8. brookst ◴[] No.45004241[source]
I’ve been so into Claude Code that I haven’t used Cursor or Copilot in VS Code in a while.

Do they also allow you to view the thinking process and planning, and hit ESC to correct it if it’s going down a wrong path? I’ve found that to be one of my favorite features of Claude Code. If it says “ah, the implementation isn’t complete, I’ll update the test to use mocks,” I can interrupt it and say no, it’s fine for the test to fail until the implementation is finished, so don’t mock anything. Etc.

It may be that I just discovered this after switching, but I don’t recall that being an interaction pattern on cursor or copilot. I was always having to revert after the fact (which might have been me not seeing the option).

replies(2): >>45005404 #>>45005414 #
9. WithinReason ◴[] No.45005404[source]
You can in VS Code as of about a month ago.
10. wrs ◴[] No.45005414[source]
Cursor does show the “thinking” in smaller greyer text, then hides it behind a small grey “thought for 30 seconds” note. If it’s off track, you just hit the stop button and correct the agent, or scroll up and restart from an earlier interaction (same thing as double-ESC in Claude Code).