631 points cratermoon | 3 comments
icameron No.44461506
Love this writing. One paragraph hit very close to home. I used to be the guy who could figure out obscure scripts by google-fu, rtfm, and willpower. Now that skill has been completely obliterated by LLMs and everyone's doing it, except it's mostly whatever.

> I don’t want to help someone who opens with “I don’t know how to do this so I asked ChatGPT and it gave me these 200 lines but it doesn’t work”.

replies(4): >>44461594 #>>44461713 #>>44462046 #>>44462411 #
Shorel No.44461713
I am still the guy doing google-fu and rtfm.

The skill has not been obliterated. We still need to fix the slop written by the LLMs, but it is not that bad.

Some people copy and paste snippets of code without knowing what the code does, and in a sense they spread technical debt around.

LLMs reduce the technical debt spread by the clueless to a lower baseline.

The issue I see is that code carrying this level of technical debt is now being created at a much faster rate.

replies(4): >>44462063 #>>44462104 #>>44462184 #>>44462633 #
ZYbCRq22HbJ2y7 No.44462104
> LLMs reduce the technical debt spread by the clueless to a lower baseline.

Yeah? What about the things LLMs do help with? Do you have no code that could use translation (move code that looks like this to code that looks like that)? LLMs are really good at that, and they save dozens of hours on single-sentence prompt tasks, even if you have to review the results.
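
To make that concrete, here is a made-up sketch of the kind of shape-to-shape rewrite I mean (the function and field names are invented for illustration, not from any real codebase):

    # Hypothetical "code that looks like this -> code that looks like that"
    # translation task. All names are made up.

    # Before: repetitive membership checks with manual defaults
    def order_total(order):
        subtotal = order["subtotal"] if "subtotal" in order else 0.0
        tax = order["tax"] if "tax" in order else 0.0
        shipping = order["shipping"] if "shipping" in order else 0.0
        return subtotal + tax + shipping

    # After: the same behavior, translated to idiomatic dict.get()
    def order_total_translated(order):
        return sum(order.get(k, 0.0) for k in ("subtotal", "tax", "shipping"))

    # Both versions agree
    assert order_total({"subtotal": 9.0, "tax": 1.0}) == \
        order_total_translated({"subtotal": 9.0, "tax": 1.0})

Pointing an LLM at a few hundred call sites shaped like the "before" and asking for the "after" is exactly the one-sentence-prompt task I mean, and the resulting diff is quick to review.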

Or is it all bad? I have made $10ks this year alone from what LLMs do, for $10s of dollars of input, but apparently I must be doing something wrong.

Or do you mean that, if you are a man with a very big gun, you must understand what that gun can do before you pull the trigger? Can only the trained pull the trigger?

replies(2): >>44462196 #>>44462336 #
1. lmm No.44462196
> Do you have no code that could use translation (move code that looks like this to code that looks like that)?

Only bad code, and what takes the time is understanding it, not rewriting it; the LLM doesn't make that part any quicker.

> they save dozens of hours on single-sentence prompt tasks, even if you have to review the results

Really? How are you reviewing quicker than you could write? Unless the code is just a pile of verbose whatever, reviewing it is slower than writing it, and a lot less fun too.

replies(1): >>44462224 #
2. ZYbCRq22HbJ2y7 No.44462224
> Really? How are you reviewing quicker than you could write? Unless the code is just a pile of verbose whatever, reviewing it is slower than writing it, and a lot less fun too.

Well, humans typically read way faster than they write, and if you own the code, have a strict style guide, etc., it is often pretty simple to understand new or modified code, unless you are dealing with a novel concept, which I wouldn't trust an LLM with anyway.

Also, these non-human entities we are discussing tend to output code very fast.

replies(1): >>44462801 #
3. lmm No.44462801
> humans typically read way faster than they write

When it's just reading, perhaps, but to review you have to read carefully and understand. It's like Kernighan's classic line: if you write code as cleverly as you can, you are, by definition, not smart enough to debug it.

> if you own the code, have a strict style guide, etc., it is often pretty simple to understand new or modified code, unless you are dealing with a novel concept, which I wouldn't trust an LLM with anyway

The way I see it, if the code is that simple and repetitive, then that repetition should probably be factored out and the code made a lot shorter. The code should only need to express the novel/distinctive parts of the problem, which, as you say, are the parts we wouldn't trust an LLM with.
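
To make that concrete, a made-up sketch (field names invented): the repetitive version is the kind of code an LLM will happily produce more of, while the factored version leaves only the distinctive part.

    # Hypothetical example: repetition factored out so only the
    # novel/distinctive part (which fields are required) remains.

    # Repetitive version
    def validate(user):
        errors = []
        if not user.get("name"):
            errors.append("name is required")
        if not user.get("email"):
            errors.append("email is required")
        if not user.get("phone"):
            errors.append("phone is required")
        return errors

    # Factored version: one line per concept, not per field
    REQUIRED_FIELDS = ("name", "email", "phone")

    def validate_factored(user):
        return [f + " is required" for f in REQUIRED_FIELDS if not user.get(f)]

    # Both versions agree
    assert validate({"name": "a"}) == validate_factored({"name": "a"})

Once the repetition is gone, there is less for an LLM to generate and less for a human to review.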