413 points martinald | 1 comments
simonw ◴[] No.46198601[source]
The cost of writing simple code has dropped 90%.

If you can reduce a problem to a point where it can be solved by simple code you can get the rest of the solution very quickly.

Reducing a problem to a point where it can be solved with simple code takes a lot of skill and experience and is generally still quite a time-consuming process.

replies(17): >>46198698 #>>46198714 #>>46198740 #>>46198844 #>>46198931 #>>46198964 #>>46199323 #>>46199413 #>>46199922 #>>46199961 #>>46200723 #>>46200892 #>>46201013 #>>46202508 #>>46202780 #>>46202957 #>>46204213 #
loandbehold ◴[] No.46198714[source]
Most software work is maintaining "legacy" code, that is, older systems that have been around for a long time and get a lot of use. I find Claude Code in particular is great at grokking old code bases and making changes to them. I work on one of those old code bases and my productivity has increased 10x, mostly due to Claude Code's ability to research a large code base, make sense of it, answer questions, and make careful surgical changes. It also helps with testing and debugging, which is a huge productivity boost. It's not about its ability to churn out lots of code quickly: it's an extra set of eyes/brain that works much faster than a human developer.
replies(9): >>46198859 #>>46198917 #>>46200183 #>>46201563 #>>46202088 #>>46202652 #>>46204053 #>>46204144 #>>46204151 #
zmmmmm ◴[] No.46200183[source]
I've found this as well. In some cases we aren't fully authorised to use the AI tools for actual coding but even just asking "how would you make this change" or "where would you look to resolve this bug" or "give me an overview of how this process works" is amazingly helpful.
replies(1): >>46200779 #
eru ◴[] No.46200779[source]
> In some cases we aren't fully authorised to use the AI tools for actual coding but even just asking "how would you make this change" [...]

Isn't the logical endpoint of this equivalent to printing out a Stackoverflow answer and manually typing it into your computer instead of copy-and-pasting?

Nitpicks aside, I agree that contemporary AIs can be great for quickly getting up to speed with a code base. Both a new library or language you want to be using, and your own organisation's legacy code.

One of the biggest advantages of using an established ecosystem was that Stackoverflow had a robust repository of already-answered questions (and you could also buy books on it). With AI you can immediately cook up your own Stackoverflow-community equivalent, one that provides answers promptly instead of closing your question as off-topic.

And I pick Stackoverflow deliberately: it's a great resource, but not reliable enough to use blindly. I feel we are in a similar situation with AI at the moment. This will change gradually as the models become better. Just like Stackoverflow required less expertise to use than attending a university course. (And a university course requires less expertise than coming up with QuickSort in the first place.)

replies(5): >>46201198 #>>46201721 #>>46201763 #>>46203188 #>>46203334 #
coldtea ◴[] No.46203188{3}[source]
>Isn't the logical endpoint of this equivalent to printing out a Stackoverflow answer and manually typing it into your computer instead of copy-and-pasting?

Isn't the answer on SO the result of a human intelligence writing it in the first place, and then voted to top place by several human intelligences? If an LLM were merely an automated "equivalent" of that, that would already be a good thing!

But in general, the LLM answer you appear to dismiss amounts to a lot more:

  Having a close-to-good-human-level programmer 
  understand your existing codebase
  answer questions about your existing codebase 
  answer questions about changes you want to make
  on demand (not confined to copying SO answers)
  interactively 
  and even being able to go in and make the changes
That amounts to "manually typing an SO answer" about as much as a pickup truck amounts to a horse carriage.

Or, to put it another way, isn't "the logical endpoint" of hiring another programmer and asking them to fix X "equivalent to printing out a Stackoverflow answer and manually typing it into their computer"?

>And I pick Stackoverflow deliberately: it's a great resources, but not reliable enough to use blindly. I feel we are in a similar situation with AI at the moment.

Well, we shouldn't be using either blindly anyway. Not even the input of another human programmer (that's why we do PR reviews).

replies(1): >>46204888 #
dns_snek ◴[] No.46204888{4}[source]
> Isn't the answer on SO the result of a human intelligence writing it in the first place, and then voted by several human intelligencies to top place? If an LLM was merely an automated "equivalent" to that, that's already a good thing!

The word "merely" is doing all of the heavy lifting here. Having human intelligence in the loop providing and evaluating answers is what made it valuable. Without that intelligence you just have a machine that mimics the process yet produces garbage.

replies(1): >>46217648 #
coldtea ◴[] No.46217648{5}[source]
>The word "merely" is doing all of the heavy lifting here

That's not some new claim that I made. My answer accepts the premise, already asserted by the parent, that an LLM is "equivalent to printing out a Stackoverflow answer and manually typing it into your computer instead of copy-and-pasting".

My point: if that's true, and an LLM is "merely that", then that's already valuable.

>Having human intelligence in the loop providing and evaluating answers is what made it valuable. Without that intelligence you just have a machine that mimics the process yet produces garbage.

Well, what we actually have is a machine that, even without AGI, has more practical intelligence than to merely produce garbage.

A machine which programmers who run circles around you and me still use, finding that it produces acceptable code, fit for the purpose, and that it can be made to fix any initial issues in a first iteration, too.

If it merely produced garbage, or was no better than random chance, we wouldn't be having this discussion.