
179 points by articsputnik | 1 comment
jjallen No.45054771
I have gone from using Claude Code all day long, since the day it launched, to only using the separate Claude app. In my mind that is a nice balance: still using it, but not too much and not too fast.

There is a temptation to just let these things run loose in our codebases, which I think is totally fine for some projects. For most websites it would usually be fine, for two reasons: 1) these models have been trained on more websites than probably anything else, and 2) if a div or some text is off by a little bit, there usually won't be any huge problem.

But if you're building something mission critical, it's a different story unless you go super slowly, which again is hard to do because these agents tempt you to go super fast. That is sort of their allure: being able to write software super fast.

But as we all know, in some programs a single wrong character can mean the whole thing doesn't work or has no value. At least that is how the one I am working on is.

I found that I lost the mental map of the codebase I am working on. Claude Code had done too much too fast.

This morning I found a function it had written to validate futures/stocks/FUT-OPT/STK-OPT symbols where the validation was super basic and terrible. We had implemented some much stronger validation against actual symbol data a week or two ago, but that hadn't been rolled out everywhere. So now I need to go back and do that.
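
To make the gap concrete (this is a hypothetical sketch, not the actual code from my project), the difference is roughly a format-only check versus resolving the symbol against real contract data:

    import re

    # Format-only check of the kind the agent wrote: anything with the right
    # shape passes, including symbols that don't exist.
    def looks_like_symbol(sym: str) -> bool:
        return bool(re.fullmatch(r"[A-Z]{1,6}(-(FUT-OPT|STK-OPT|FUT|STK))?", sym))

    # Stronger check: resolve against known contract data (a stand-in set here;
    # in practice it would be loaded from the broker's contract definitions).
    KNOWN_CONTRACTS = {("ES", "FUT"), ("AAPL", "STK")}

    def is_valid_contract(sym: str, sec_type: str) -> bool:
        return (sym, sec_type) in KNOWN_CONTRACTS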

Anyway, I do think it's helpful for finding where certain code is written and for suggesting various ways to solve problems. But the separate GUI apps can do that for us.

So for now I am going to keep just using the separate LLM apps. I will also save a lot of money in the meantime (which I would gladly spend on a higher-quality Claude Code-ish setup).

replies(4): >>45054878 >>45054891 >>45055009 >>45059659
simianwords No.45054891
The reality is that you can't have AI do too much for you or else you completely lose track of what is happening. I find it useful to let it do small stupid things and use it for brainstorming.

I don't like having it do complete PRs that span multiple files.

replies(1): >>45059151
tharkun__ No.45059151
I don't think the "complete PR spanning multiple files" is an issue actually.

I think the issue is when you don't yourself understand what it's doing. If all you do is tell it what the outcome should be from a user's perspective, check that that's what it does, and then merge, then you have a problem.

But if you just use it to get to the code you would have liked to write yourself faster, or to have it write the code you'd have written for that boring thing you know needs doing but never got around to, then it's actually a great tool.

I think in that case it's like IDE-based refactorings enabled by well-typed languages. Way back in the day, there were refactorings that were a royal pain in the butt to do in our Perl code base. I did a lot of them, but they weren't fun. Very simple renames or function extractions that help code readability just don't get done if you have to do them manually. If you can tell an IDE to do a rename and you're guaranteed that nothing breaks, it's a no-brainer. Anyone not doing it is simply a bad developer if you ask me.
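
As a tiny, made-up illustration of the kind of extraction I mean (names invented for the example), the behavior stays identical while the rule gets a name and a single home:

    # Before: the discount rule is buried inline (and typically duplicated).
    def order_total(price: float, qty: int) -> float:
        subtotal = price * qty
        return subtotal * 0.9 if qty >= 10 else subtotal

    # After an "extract function" refactoring: same behavior, but the rule is
    # named and lives in one place.
    def apply_bulk_discount(subtotal: float, qty: int) -> float:
        return subtotal * 0.9 if qty >= 10 else subtotal

    def order_total_extracted(price: float, qty: int) -> float:
        return apply_bulk_discount(price * qty, qty)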

There's a lot of copy-and-paste coding going on in "business software". And that's fine. I engage in that too, all the time. You have a blueprint of how to do something in your code base, and you just need to do something similar "over there". So you know where to find the thing to copy, paste, and adjust. The AI can do it for you even faster, especially if you already know what to tell it to copy. And in some cases all you need to know is that there's something to copy, not exactly where it is, and it'll still copy it very nicely for you.

And the resulting PR that does span multiple files is totally fine. You just came up with it faster than you ever could have. Personally, I skipped the whole "Copilot as a better autocomplete" era and went straight to agentic workflows, with Claude Code specifically, using it from within IntelliJ in a monorepo I already know a lot about. It's really awesome, actually.

The funny thing is that, at least in my experience, the people who are slower than you at doing any of this manually are not gonna be good at it with AI either. You're still gonna be better and faster at using this new tool than they were at using the previously available ones.

replies(2): >>45059275 >>45063108
TheCapeGreek No.45063108
Fully agreed.

In my view, effective coding-agent use boils down to being good at writing briefs, as you would for any ticket. The better the formatting, detail, and context you can provide, BOTH at the outcome level and at the technical-architecture level, the better your results are.

To put it another way: if, before LLMs came along, you were someone who (purposely or otherwise) became good at writing documentation and briefing tickets for your team, I think there's a decent chance you'll go further with these agentic tools than others who just shove an idea into them and hope for the best.