    413 points martinald | 11 comments

    simonw No.46198601
    The cost of writing simple code has dropped 90%.

    If you can reduce a problem to a point where it can be solved by simple code you can get the rest of the solution very quickly.

    Reducing a problem to a point where it can be solved with simple code takes a lot of skill and experience and is generally still quite a time-consuming process.

    loandbehold No.46198714
    Most software work is maintaining "legacy" code: older systems that have been around for a long time and get a lot of use. I find Claude Code in particular is great at grokking old code bases and making changes to them. I work on one of those old code bases, and my productivity has increased 10x, mostly due to Claude Code's ability to research a large code base, make sense of it, answer questions, and make careful surgical changes. It also helps with testing and debugging, which is a huge productivity boost. It's not about its ability to churn out lots of code quickly: it's an extra set of eyes and a brain that works much faster than a human developer.
    zmmmmm No.46200183
    I've found this as well. In some cases we aren't fully authorised to use the AI tools for actual coding, but even just asking "how would you make this change", "where would you look to resolve this bug", or "give me an overview of how this process works" is amazingly helpful.
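
    Purely as an illustration of that read-only pattern (a minimal sketch, not the commenter's actual tooling), something like the following would do it with the anthropic Python SDK; the model name, file path, and questions are placeholders:

      import anthropic

      client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
      source = open("billing/invoice_runner.py").read()  # hypothetical file

      # Read-only use: analysis questions only, no code changes requested.
      for question in [
          "How would you make this change: bill per seat instead of per org?",
          "Where would you look to resolve a bug where invoices generate twice?",
          "Give me an overview of how this billing run works.",
      ]:
          msg = client.messages.create(
              model="claude-sonnet-4-20250514",  # placeholder model name
              max_tokens=1024,
              messages=[{"role": "user", "content": question + "\n\n" + source}],
          )
          print(question, "\n", msg.content[0].text, "\n")
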
    1. eru No.46200779
    > In some cases we aren't fully authorised to use the AI tools for actual coding but even just asking "how would you make this change" [...]

    Isn't the logical endpoint of this equivalent to printing out a Stackoverflow answer and manually typing it into your computer instead of copy-and-pasting?

    Nitpicks aside, I agree that contemporary AIs can be great for quickly getting up to speed with a code base: both a new library or language you want to start using, and your own organisation's legacy code.

    One of the biggest advantages of using an established ecosystem was that Stack Overflow had a robust repository of already-answered questions (and you could also buy books on it). With AI you can immediately cook up your own Stack Overflow community equivalent, one that provides answers promptly instead of closing your question as off-topic.

    And I pick Stack Overflow deliberately: it's a great resource, but not reliable enough to use blindly. I feel we are in a similar situation with AI at the moment. This will change gradually as the models become better. Just like Stack Overflow required less expertise to use than attending a university course. (And a university course requires less expertise than coming up with QuickSort in the first place.)

    2. ChrisMarshallNY No.46201198
    > Isn't the logical endpoint of this equivalent to printing out a Stackoverflow answer and manually typing it into your computer instead of copy-and-pasting?

    Not in my case (I never used SO like that, anyway). I use it almost exactly like SO, except much more quickly and interactively (and without the implication that I’m “lazy” or “stupid” for not already knowing the answer).

    I have found that ChatGPT gives me better code than Claude (I write Swift); even learning my coding and documentation style.

    I still need to review all the code it gives me, and I have yet to use it verbatim, but it’s getting close.

    The most valuable thing is that when I get an error, I can ask it: “Here are the symptoms and the code. What do you think is going on?” It usually gives me a good starting point.

    I could definitely figure it out on my own, but it might take half an hour. ChatGPT will give me a solid lead in about half a minute.
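
    For concreteness only (a sketch, not the commenter's actual setup), that symptoms-plus-code prompt might look like this with the openai Python SDK; the model name, file, and symptoms are made up:

      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      symptoms = "Table view rows flicker when scrolling quickly; no console errors."
      code = open("FeedViewController.swift").read()  # hypothetical file

      response = client.chat.completions.create(
          model="gpt-4o",  # placeholder model name
          messages=[{
              "role": "user",
              "content": f"Here are the symptoms and the code. What do you think "
                         f"is going on?\n\nSymptoms: {symptoms}\n\nCode:\n{code}",
          }],
      )
      print(response.choices[0].message.content)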

    3. colechristensen No.46201721
    >not reliable enough to use blindly

    I've been building things with Claude while looking at, say, less than 5% of the code it produces. What I've built are tools I want to use myself, and... well, they work. So somebody can say that I can't do it, but on the other hand I've wanted to build several kinds of ducks, and the things I've built look like ducks and quack like ducks, so...

    I've found it's a lot better at evaluating code than producing it, so what you do is: tell it to write some code, then tell it to give you the top 10 things wrong with the code, then tell it to fix the five of them that are valid and important. That is a much different flow from going on an expedition to find an SO solution to an obscure problem.

    A good quality metric for your code is to ask an LLM to find the ten worst things about it; if those are all stupid, then your code is pretty good. I did this recently on a codebase, and its number 1 complaint was that the name I had chosen was stupid and confusing (which it was; I'm not explaining the joke to a computer). That was my sign that it was done finding real problems and it was time to move on.
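
    As an illustration only (a minimal sketch, not the commenter's workflow), that generate/critique/selective-fix loop could look like this with the anthropic Python SDK; the model name, the task, and the "valid items" selection are placeholders, and a human still judges which critiques are real:

      import anthropic

      client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

      def ask(prompt: str) -> str:
          msg = client.messages.create(
              model="claude-sonnet-4-20250514",  # placeholder model name
              max_tokens=4096,
              messages=[{"role": "user", "content": prompt}],
          )
          return msg.content[0].text

      code = ask("Write a Python function that deduplicates log lines by timestamp.")
      critique = ask("List the top 10 things wrong with this code:\n\n" + code)

      # Human in the loop: read the critique, decide which items are real,
      # then ask only for those fixes.
      valid_items = "1, 3, 4, 7, 9"  # hypothetical selection after manual review
      fixed = ask("Fix only items " + valid_items + " from this critique; leave "
                  "everything else alone.\n\nCritique:\n" + critique +
                  "\n\nCode:\n" + code)
      print(fixed)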

    4. jordanbeiber No.46201763
    The problem is most likely not writing the actual code, but rather understanding an old, fairly large codebase and how it’s stitched together.

    SO is (was?) great when you were thinking about how nicely a recursive reduce function could replace the mess you’d just cobbled together, but language X just didn’t yet flow naturally for you.

    6. coldtea No.46203188
    >Isn't the logical endpoint of this equivalent to printing out a Stackoverflow answer and manually typing it into your computer instead of copy-and-pasting?

    Isn't the answer on SO the result of a human intelligence writing it in the first place, and then being voted to the top by several human intelligences? If an LLM was merely an automated "equivalent" of that, that's already a good thing!

    But in general, the LLM answer you appear to dismiss amounts to a lot more:

      Having a close-to-good-human-level programmer
      understand your existing codebase
      answer questions about your existing codebase 
      answer questions about changes you want to make
      on demand (not confined to copying SO answers)
      interactively 
      and even being able to go in and make the changes
    
    That amounts to "manually typing an SO answer" about as much as a pickup truck amounts to a horse carriage.

    Or, to put it another way, isn't "the logical endpoint" of hiring another programmer and asking them to fix X "equivalent to printing out a Stackoverflow answer and manually typing it into their computer"?

    >And I pick Stack Overflow deliberately: it's a great resource, but not reliable enough to use blindly. I feel we are in a similar situation with AI at the moment.

    Well, we shouldn't be using either blindly anyway. Not even the input of another human programmer (that's why we do PR reviews).

    7. bryanrasmussen No.46203334
    >Isn't the logical endpoint of this equivalent to printing out a Stackoverflow answer and manually typing it into your computer instead of copy-and-pasting?

    when AI works well it is superior to Stack Overflow, because what it replaces is not "look up answer on SO, copy, paste" but rather: look up several different things on SO that relate to the problem you are trying to solve, but for which no exact, definite solution is posted anywhere, and combine those things into a bit of code that you will probably still refactor a bit, all in less time than doing the SO lookup yourself. When it works, it can turn 2 hours of research into 2 minutes.

    The problems are:

    AI also sometimes replicates the following process: a dev who doesn't understand all parts of the solution or the requirements copies bits of code together from various answers, making something that sort of works but is inefficient and has underlying problems.

    Even with a correctly working solution, your developer does not get in those 2 minutes what they used to get in the two hours: an understanding of the problem space and of how the parts of the solution hang together. This is why it is more useful for seniors than juniors; part of looking through SO for what you want is light education.

    8. impjohn No.46204025
    >then tell it to give you the top 10 things wrong with the code, then tell it to fix the five of them that are valid and important.

    I would be cautious with this. I've tried it multiple times, and it often produces very subtle bugs. Sometimes the code is not bad enough to have 5 defects, but the model will comply anyway and change things that don't need changing. You will find out in prod at some point.

    9. dns_snek No.46204888
    > Isn't the answer on SO the result of a human intelligence writing it in the first place, and then being voted to the top by several human intelligences? If an LLM was merely an automated "equivalent" of that, that's already a good thing!

    The word "merely" is doing all of the heavy lifting here. Having human intelligence in the loop providing and evaluating answers is what made it valuable. Without that intelligence you just have a machine that mimics the process yet produces garbage.

    10. colechristensen No.46208655
    To be clear, I'm instructing it to generate a list of issues for me. I then decide if anything on that list is worth fixing (or is an issue at all, etc.)
    11. coldtea No.46217648
    >The word "merely" is doing all of the heavy lifting here

    That's not some new claim that I made. My answer accepts the premise already asserted by the parent: that an LLM is "equivalent to printing out a Stackoverflow answer and manually typing it into your computer instead of copy-and-pasting".

    My point: if that's true, and an LLM is "merely that", then that's already valuable.

    >Having human intelligence in the loop providing and evaluating answers is what made it valuable. Without that intelligence you just have a machine that mimics the process yet produces garbage.

    Well, what we actually have is a machine that, even without AGI, has more practical intelligence than to merely produce garbage.

    A machine which programmers that run circles around you and me still use, finding that it produces acceptable code, fit for purpose, and that it can be made to fix any initial issues in a follow-up iteration, too.

    If it merely produced garbage, or was no better than random chance, we wouldn't be having this discussion.