
LLM Inevitabilism

(tomrenner.com)
1611 points SwoopsFromAbove | 2 comments
mg ◴[] No.44568158[source]
In the 90s, a friend told me about the internet, and that he knew someone at a university who had access to it and could show us. An hour later, we were sitting in front of a computer in that university, watching his friend surf the web. Clicking on links, receiving pages of text. Faster than one could read. In a nice layout. Even with images. And links to other pages. We were shocked. No printing, no shipping, no waiting. This was the future. It was inevitable.

Yesterday I wanted to rewrite a program to use a large library, which would have required diving deep into its documentation or reading its code to tackle my use case. As a first try, I just copy+pasted the whole library and my whole program into GPT 4.1 and told it to rewrite the program using the library. It succeeded on the first attempt. The rewrite itself was small enough that I could read all the code changes in 15 minutes and make a few stylistic changes. Done. Hours of time saved. This is the future. It is inevitable.

PS: Most replies seem to compare my experience to the responders' experiences with agentic coding, where the developer iteratively changes the code by chatting with an LLM. I am not doing that. I use a "One prompt one file. No code edits." approach, which I describe here:

https://www.gibney.org/prompt_coding

replies(58): >>44568182 #>>44568188 #>>44568190 #>>44568192 #>>44568320 #>>44568350 #>>44568360 #>>44568380 #>>44568449 #>>44568468 #>>44568473 #>>44568515 #>>44568537 #>>44568578 #>>44568699 #>>44568746 #>>44568760 #>>44568767 #>>44568791 #>>44568805 #>>44568823 #>>44568844 #>>44568871 #>>44568887 #>>44568901 #>>44568927 #>>44569007 #>>44569010 #>>44569128 #>>44569134 #>>44569145 #>>44569203 #>>44569303 #>>44569320 #>>44569347 #>>44569391 #>>44569396 #>>44569574 #>>44569581 #>>44569584 #>>44569621 #>>44569732 #>>44569761 #>>44569803 #>>44569903 #>>44570005 #>>44570024 #>>44570069 #>>44570120 #>>44570129 #>>44570365 #>>44570482 #>>44570537 #>>44570585 #>>44570642 #>>44570674 #>>44572113 #>>44574176 #
shaky-carrousel ◴[] No.44569732[source]
Hours of time saved, and you learned nothing in the process. You are slowly becoming a cog in the LLM process instead of an autonomous programmer. You are losing autonomy and depending more and more on external companies. And the day will come when, with all that power, they'll set whatever price or conditions they want. And you will accept. That's the future. And it's not inevitable.
replies(4): >>44569772 #>>44569796 #>>44569813 #>>44569871 #
baxtr ◴[] No.44569871[source]
Did you build the house you live in? Did you weave your own clothes or grow your own food?

We all depend on systems others built. Determining when that trade-off is worthwhile and recognizing when convenience turns into dependence are crucial.

replies(2): >>44569942 #>>44570007 #
shaky-carrousel ◴[] No.44569942[source]
Did you write your own letters? Did you write your own arguments? Did you write your own code? I do, and I don't depend on systems others built to do so. And losing the ability to keep doing so is a pretty big trade-off, in my opinion.
replies(3): >>44570262 #>>44570733 #>>44571102 #
djray ◴[] No.44570262{3}[source]
There seems to be a mistaken assumption that having an AI (or indeed someone else) help you achieve a task means you aren’t learning anything. This is reductionist. I suggest instead that it’s about degrees of autonomy. The person you’re responding to made a choice to get the AI to help integrate a library. They chose NOT to have the AI edit the files itself; instead, they spent time reading through the changes, understanding the integration points, and tweaking the code to make it their own. This is very different from vibe coding.

I do a similar loop with my use of AI: I will upload code to Gemini 2.5 Pro, talk through options and assumptions, and maybe get it to write some or all of the next step, or to try out different approaches to a refactor. Integrating any code back into the original source is never copy-and-paste, and that’s where the learning is.

For example, I added Dexie (a library/wrapper for accessing IndexedDB) to a browser extension project the other day, and the AI helped me get started with a minimal amount of initial knowledge, yet I learned a lot about Dexie and have been able to expand upon the code myself since. If I were on my own, I would probably have barrelled ahead and just used IndexedDB directly, resulting in a lot more boilerplate code and time spent doing busywork.

It’s this sort of friction reduction that I find most liberating about AI. Trying out a new library isn’t a multi-hour slog; instead, you can sample it and possibly reject it as unsuitable almost immediately without having to waste a lot of time on R&D. In my case, I didn’t learn ‘raw’ IndexedDB, but instead I got the job done with a library offering a more suitable level of abstraction, and saved hours in the process.

This isn’t lazy or giving up the opportunity to learn; it’s simply optimising your time.

The "not invented here" syndrome is something I kindly suggest you examine, as you may find you are actually limiting your own innovation by rejecting everything that you can't do yourself.

replies(2): >>44570358 #>>44570987 #
shaky-carrousel ◴[] No.44570358[source]
It's not reductionist, it's a fact. If you, instead of learning Python, ask an LLM to code you something in Python, you won't learn a line of Python in the process. Even if you read the produced code from beginning to end. Because (and honestly I'm surprised I have to point this out, here of all places) you learn by writing code, not by reading code.
replies(1): >>44570689 #
1. rybosome ◴[] No.44570689[source]
I encourage you to try this yourself and see how you feel.

Recently I used an LLM to help me build a small application in Rust, having never used the language before (though I had a few years of high-performance C++ experience).

The LLM wrote most of the code, but never more than ~100 lines at a time; then I’d tweak, insert, commit, and plan the next feature. I hand-wrote very little, but I was extremely involved in the design and layout of the app.

Without question, I learned a lot about Rust. I used tokio’s async runtime, its mpsc channels, and streams to make a high-performance crawler that worked really well for my use case.
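
To give a sense of the shape, here is a minimal sketch of the core pattern: a bounded tokio mpsc channel feeding URLs from a producer task to a consumer that spawns a task per URL. It is illustrative only, not the actual crawler; the URLs and names are placeholders, and it assumes tokio = { version = "1", features = ["full"] } as the only dependency.

    use tokio::sync::mpsc;

    #[tokio::main]
    async fn main() {
        // Bounded channel: the producer gets backpressure if the consumer falls behind.
        let (tx, mut rx) = mpsc::channel::<String>(100);

        // Producer task: in a real crawler these URLs would come from parsed pages.
        tokio::spawn(async move {
            for i in 0..10 {
                let url = format!("https://example.com/page/{i}"); // placeholder URLs
                if tx.send(url).await.is_err() {
                    break; // receiver dropped, stop producing
                }
            }
        });

        // Consumer: pull URLs off the channel and handle each one in its own task.
        let mut handles = Vec::new();
        while let Some(url) = rx.recv().await {
            handles.push(tokio::spawn(async move {
                // A real crawler would do an HTTP fetch and parse links here.
                println!("fetching {url}");
            }));
        }
        for handle in handles {
            let _ = handle.await;
        }
    }

The bounded channel is the point of the design: the producer waits when the buffer is full instead of letting the work queue grow without limit.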

If I needed to write Rust without an LLM now, I believe I could do it, though it would be slower and harder.

There’s definitely a “turn my brain off and LLM for me” way to use these tools, but it is reductive to state that ALL usage of such tools is like this.

replies(1): >>44570761 #
2. shaky-carrousel ◴[] No.44570761[source]
Of course you have learned a lot about Rust. What you haven't learned is to program in Rust. Try, a month from now, to write that application in Rust from scratch, without any LLM help. If you can, then you truly learned to program in Rust. If you can't, then what you learned is just generic trivia about Rust.