627 points cratermoon | 15 comments
1. icameron No.44461506
Love this writing. One paragraph hit very close to home. I used to be the guy who could figure out obscure scripts by google-fu and rtfm and willpower. Now that skill has been completely obliterated by LLMs and everyone's doing it, except it's mostly whatever

> I don’t want to help someone who opens with “I don’t know how to do this so I asked ChatGPT and it gave me these 200 lines but it doesn’t work”.

2. N_Lens No.44461594
I use LLMs for coding everyday and agree with most of the article, even if it does attack me as an "indignant HackerNews mudpie commenter".

In the same vein, I've actually worked on crypto projects in both DeFi and NFT spaces, and agree with the "money for criminals" joke assessment of crypto, even if the technology is quite fascinating.

3. Shorel No.44461713
I am still the guy doing google-fu and rtfm.

The skill has not been obliterated. We still need to fix the slop written by the LLMs, but it is not that bad.

Some people copy and paste snippets of code without knowing what it does, and in a sense, they spread technical debt around.

LLMs lower the technical debt spread by the clueless, to a lower baseline.

The issue I see is that code carrying this level of technical debt is now being created at a much faster rate.

4. ZYbCRq22HbJ2y7 No.44462046
No one is losing that skill, as LLMs are wrong a lot of the time.

No one is becoming a retard omniscient using LLMs and anyone saying they are is lying and pushing a narrative.

Humans still correct things; humans understand that systems have flaws, and can use those systems while correcting them.

This is like saying someone used Word's grammar correction feature and accepted all the corrections. It doesn't make sense, and the people pushing the narrative are disingenuous.

5. darkwater No.44462063
> LLMs lower the technical debt spread by the clueless, to a lower baseline.

I'm SO stealing this!! <3

6. ZYbCRq22HbJ2y7 No.44462104
> LLMs lower the technical debt spread by the clueless, to a lower baseline.

Yeah? What about what LLMs help with? Do you have no code that could use translation (move code that looks like this to code that looks like that)? LLMs are real good with that, and they save dozens of hours on single-sentence prompt tasks, even if you have to review them.

Or is it all bad? I have made $10ks this year alone on what LLMs do, for $10s of dollars of input, but apparently I must be doing something wrong.

Or do you mean, if you are a man with a very big gun, you must understand what that gun can do before you pull the trigger? Can only the trained pull the trigger?
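A minimal sketch of the kind of mechanical translation meant above (all names are illustrative, not from any real codebase): migrating an untyped dict record to a dataclass. Every call site changes the same way, which is why this sort of pattern-driven rewrite is a plausible fit for a one-sentence prompt plus human review.

```python
from dataclasses import dataclass

# Before: untyped dict records passed around the codebase.
def make_user_legacy(name, email):
    return {"name": name, "email": email, "active": True}

# After: the same record translated mechanically into a typed dataclass.
@dataclass
class User:
    name: str
    email: str
    active: bool = True

def make_user(name: str, email: str) -> User:
    return User(name=name, email=email)

legacy = make_user_legacy("ada", "ada@example.com")
modern = make_user("ada", "ada@example.com")
assert legacy["email"] == modern.email  # same data, different shape
```

The translation itself is tedious but pattern-driven; the judgment calls (which fields should be optional, where defaults belong) are exactly the parts a reviewer still has to check.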

7. sunrunner No.44462184
I always imagine that there's essentially a "knowledge debt" when doing almost any development today, unless you're operating at the lowest level (or you understand it all the way down, and even then there's almost always a level below).

The copy-paste of usable code snippets is somewhat comparable to any use of a library or framework, in the sense that there's an element of not understanding what the entire thing is doing, or at least how. So every time this is done it adds to the knowledge debt: a borrowing of the time, energy, and understanding needed to come up with the thing being used.

By itself this isn't a problem, and realistically it's impossible to avoid; in a lot of cases you may never get to the point where you have to pay this back. But there's also a limit on the rate of debt accumulation, which is how fast you can pull in libraries, code snippets, and other abstractions, and as you said, LLMs' ability to produce text at a superhuman rate potentially serves to _rapidly_ increase the rate of knowledge debt accumulation.

If debt as an economic force is seen as something that can stimulate short-term growth then there must be an equivalent for knowledge debt, a short-term increase in the ability of a person to create a _thing_ while trading off the long-term understanding of it.

8. lmm No.44462196{3}
> Do you have no code that could use translation (move code that looks like this to code that looks like that)?

Only bad code, and what takes the time is understanding it, not rewriting it, and the LLM doesn't make that part any quicker.

> they save dozens of hours on single sentence prompt tasks, even if you have to review them

Really? How are you reviewing quicker than you could write? Unless the code is just a pile of verbose whatever, reviewing it is slower than writing it, and a lot less fun too.

9. ZYbCRq22HbJ2y7 No.44462224{4}
> Really? How are you reviewing quicker than you could write? Unless the code is just a pile of verbose whatever, reviewing it is slower than writing it, and a lot less fun too.

Well, humans typically read way faster than they write, and if you own the code, have a strict style guide, etc., it is often pretty simple to understand new or modified code, unless you are dealing with a novel concept, which I wouldn't trust an LLM with anyway.

Also, these non-human entities we are discussing tend to output code very fast.

10. Shorel No.44462313{3}
That's where documentation matters.

"Take this snippet of code; this is what each part means, and this is how you can change it."

It doesn't explain how it is implemented, but it explains the syntax and the semantics of it, and that's enough.

Good documentation makes all the difference, at least for me.

11. Shorel No.44462336{3}
A lower baseline of technical debt is a positive thing.

You don't want more technical debt.

Ideally, you want zero technical debt.

In practice only a hello world program has zero technical debt.

12. wiseowise No.44462411
> I used to be the guy who could figure out obscure scripts by google-fu and rtfm and willpower. Now that skill has been completely obliterated by LLMs and everyone's doing it, except it's mostly whatever

And thank fuck it happened. All the shell and obscure Unix tools that require a brain molded in the 80s to use on a day-to-day basis should've been superseded by something user-friendly a long time ago.

13. wiseowise No.44462418
> a retard omniscient

That’s a nice description, to be honest.

14. 8n4vidtmkvmk No.44462633
I frequently have to tell the LLM to RTFM because it's wrong. But I can usually paste the manual in, which saves me some reading. It's scary, because when it's wrong and you don't happen to know better... then your code or whatever is just a little worse.
15. lmm No.44462801{5}
> humans typically read way faster than they write

When it's just reading, perhaps, but to review you have to read carefully and understand. It's like the classic quote that if you're writing code at the limits of your ability you won't be able to debug it.

> if you own the code, have a strict style guide, etc, it is often pretty simple to understand new or modified code, unless you are dealing with a novel concept, which I wouldn't trust a LLM with anyway

The way I see it, if the code is that simple and repetitive then that repetition should probably be factored out and the code made a lot shorter. The code should only need to express the novel/distinctive parts of the problem, which, as you say, are the parts we wouldn't trust an LLM with.
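A minimal sketch of that point (all names are illustrative): the repetitive version is the kind of boilerplate an LLM churns out quickly, while the factored version leaves only the distinctive part, which fields are required, as actual code.

```python
# Repetitive version: one near-identical function per field.
def validate_name(data):
    if not data.get("name"):
        raise ValueError("missing name")
    return data["name"]

def validate_email(data):
    if not data.get("email"):
        raise ValueError("missing email")
    return data["email"]

# Factored version: the repetition is gone, and the remaining code
# expresses only the distinctive part (the list of required fields).
def require(data, field):
    if not data.get(field):
        raise ValueError(f"missing {field}")
    return data[field]

REQUIRED = ["name", "email"]

def validate(data):
    return {field: require(data, field) for field in REQUIRED}

record = validate({"name": "ada", "email": "ada@example.com"})
```

Once the code is factored this way, there is simply less surface area for an LLM to generate, and less for a reviewer to read.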