
413 points martinald | 1 comment | source
simonw ◴[] No.46198601[source]
The cost of writing simple code has dropped 90%.

If you can reduce a problem to a point where it can be solved by simple code you can get the rest of the solution very quickly.

Reducing a problem to a point where it can be solved with simple code takes a lot of skill and experience and is generally still quite a time-consuming process.

replies(17): >>46198698 #>>46198714 #>>46198740 #>>46198844 #>>46198931 #>>46198964 #>>46199323 #>>46199413 #>>46199922 #>>46199961 #>>46200723 #>>46200892 #>>46201013 #>>46202508 #>>46202780 #>>46202957 #>>46204213 #
loandbehold ◴[] No.46198714[source]
Most software work is maintaining "legacy" code, that is, older systems that have been around for a long time and get a lot of use. I find Claude Code in particular is great at grokking old code bases and making changes to them. I work on one of those old code bases and my productivity increased 10x, mostly due to Claude Code's ability to research large code bases, make sense of them, answer questions, and make careful surgical changes. It also helps with testing and debugging, which is a huge productivity boost. It's not about its ability to churn out lots of code quickly: it's an extra set of eyes/brain that works much faster than a human developer.
replies(9): >>46198859 #>>46198917 #>>46200183 #>>46201563 #>>46202088 #>>46202652 #>>46204053 #>>46204144 #>>46204151 #
zmmmmm ◴[] No.46200183[source]
I've found this as well. In some cases we aren't fully authorised to use the AI tools for actual coding but even just asking "how would you make this change" or "where would you look to resolve this bug" or "give me an overview of how this process works" is amazingly helpful.
replies(1): >>46200779 #
eru ◴[] No.46200779[source]
> In some cases we aren't fully authorised to use the AI tools for actual coding but even just asking "how would you make this change" [...]

Isn't the logical endpoint of this equivalent to printing out a Stackoverflow answer and manually typing it into your computer instead of copy-and-pasting?

Nitpicks aside, I agree that contemporary AIs can be great for quickly getting up to speed with a code base. Both a new library or language you want to be using, and your own organisation's legacy code.

One of the biggest advantages of using an established ecosystem was that Stackoverflow had a robust repository of already answered questions (and you could also buy books on it). With AI you can immediately cook up your own Stackoverflow community equivalent that provides answers promptly instead of closing your question as off-topic.

And I pick Stackoverflow deliberately: it's a great resource, but not reliable enough to use blindly. I feel we are in a similar situation with AI at the moment. This will change gradually as the models become better. Just as Stackoverflow required less expertise to use than attending a university course. (And a university course requires less expertise than coming up with QuickSort in the first place.)

replies(5): >>46201198 #>>46201721 #>>46201763 #>>46203188 #>>46203334 #
colechristensen ◴[] No.46201721[source]
>not reliable enough to use blindly

I've been building things with Claude while looking at say less than 5% of the code it produces. What I've built are tools I want to use myself and... well they work. So somebody can say that I can't do it, but on the other hand I've wanted to build several kinds of ducks and what I've built look like ducks and quack like ducks so...

I've found it's a lot better at evaluating code than producing it so what you do is tell it to write some code, then tell it to give you the top 10 things wrong with the code, then tell it to fix the five of them that are valid and important. That is a much different flow than going on an expedition to find a SO solution to an obscure problem.

A good quality metric for your code is to ask an LLM to find the ten worst things about it; if all of those are stupid, then your code is pretty good. I did this recently on a codebase and its number-one complaint was that the name I had chosen was stupid and confusing (which it was, I'm not explaining the joke to a computer), and that was my sign that it was done finding problems and it was time to move on.
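The critique-then-fix loop described above can be sketched as two prompt builders. This is a minimal illustration, not colechristensen's actual setup: the helper names and prompt wording are assumptions, and the commented-out client call shows roughly how it would plug into a chat API such as the Anthropic Python SDK.

```python
def critique_prompt(code: str, n: int = 10) -> str:
    """Ask for the top-n problems, most important first,
    rather than an open-ended review."""
    return (
        f"List the top {n} things wrong with this code, "
        "most important first:\n\n" + code
    )

def fix_prompt(issues: list[str]) -> str:
    """After a human filters the critique list down to the issues
    that are valid and important, ask for fixes to only those."""
    numbered = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(issues))
    return (
        "Fix only the following issues; do not change anything else:\n"
        + numbered
    )

# Wiring it to a real model would look roughly like this (untested sketch;
# model name is a placeholder):
#   client = anthropic.Anthropic()
#   reply = client.messages.create(
#       model="claude-sonnet-...",
#       max_tokens=2048,
#       messages=[{"role": "user", "content": critique_prompt(code)}],
#   )
```

The human filtering step between the two prompts is the point: the model generates the candidate list, but a person decides which items are real before anything gets changed.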

replies(2): >>46202799 #>>46204025 #
impjohn ◴[] No.46204025[source]
>then tell it to give you the top 10 things wrong with the code, then tell it to fix the five of them that are valid and important.

I would be cautious with this. I've tried it multiple times and it often produces very subtle bugs. Sometimes the code isn't bad enough to have 5 defects, but the model will comply anyway and change things that don't need changing. You will find out in prod at some point.

replies(1): >>46208655 #
colechristensen ◴[] No.46208655[source]
To be clear, I'm instructing it to generate a list of issues for me. I then decide if anything on that list is worth fixing (or is an issue at all, etc.)