677 points meetpateltech | 1 comments | | HN request time: 0.216s | source
Syzygies ◴[] No.45125257[source]
This interests me, but they don't address practical use of Claude Code with Opus 4.1 at scale.

I have a $200/month Anthropic Max subscription that I use for help exploring and coding my math research. As of now, no AI model can compete with Opus 4.1 on my most challenging tasks; I try every one I can. Gemini 2.5 Pro is great for code review and a second opinion, but drives off the road when it takes the wheel.

I tried the $100/month plan and spent $20 in an hour the first time I went over the limit; an API key is not a practical way to use Opus 4.1.

There are plenty of concerns with using Claude Code in a terminal that Zed could address. Mainly, I can't "see over AI's shoulder," so I also need to test. The most careful extension I coded was terminal sessions we could share as equal participants. Nevertheless, as a rule I'd attribute my relative success to just living with the shortcomings, as with a partner who snores. The AI loses track of the current directory all the time, or forgets my variable-naming and comment conventions? Just keep going, fix it later.
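For the convention-drift complaint, one partial mitigation is Claude Code's project-memory file, CLAUDE.md, which it reads at the start of a session. Pinning conventions there gives the model a stable reference it can re-read when it loses track. A minimal sketch (the specific conventions below are hypothetical placeholders, not from the original post):

```markdown
# CLAUDE.md — project conventions (illustrative)

## Working directory
- Always return to the repo root before running commands.

## Naming and comments
- Variables: snake_case; no single-letter names outside tight loops.
- Each function gets a one-line comment above it; no trailing comments.

## Workflow
- Run the test suite before declaring a task done.
```

This doesn't eliminate the drift, but it reduces how often you have to "fix it later."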

How can I get equivalent value to my Max plan, using Claude Code Opus 4.1 with Zed?

replies(3): >>45125301 #>>45125410 #>>45125460 #
1. m13rar ◴[] No.45125301[source]
I use Zed and Claude Code side by side right now. I haven't tried the newly released assisted agent mode in Zed.

Yes, Opus has been good at instruction following, and the same goes for Gemini for second opinions and brainstorming.

They're not perfect, but I definitely see plenty of value in both tools as long as the services stay reliable.

I don't like the cloud-based operation of these models; the experience is extremely flaky and unreliable. I've found OpenAI Codex and its models to be more reliable in their responses and more consistent in quality of service.

I would still prefer a fully locally hosted equivalent of whatever the state-of-the-art coding-assistant models are, to speed up work.

That will take time, though, as with every technological evolution. We'll be stuck with time sharing for a while, haha, until the resource side of this technology scales and becomes economical enough to be ubiquitous.