
LLM Inevitabilism

(tomrenner.com)
1611 points | SwoopsFromAbove
mg ◴[] No.44568158[source]
In the 90s, a friend told me about the internet, and that he knew someone at a university who had access to it and could show us. An hour later, we were sitting in front of a computer in that university, watching his friend surf the web. Clicking on links, receiving pages of text. Faster than one could read. In a nice layout. Even with images. And links to other pages. We were shocked. No printing, no shipping, no waiting. This was the future. It was inevitable.

Yesterday I wanted to rewrite a program to use a large library, which would have required me to dive deep into the documentation or read its code to tackle my use case. As a first try, I just copy+pasted the whole library and my whole program into GPT 4.1 and told it to rewrite the program using the library. It succeeded on the first attempt. The rewrite itself was small enough that I could read all the code changes in 15 minutes and make a few stylistic changes. Done. Hours of time saved. This is the future. It is inevitable.

PS: Most replies seem to compare my experience to the responders' own experience with agentic coding, where the developer iteratively changes the code by chatting with an LLM. I am not doing that. I use a "One prompt one file. No code edits." approach, which I describe here:

https://www.gibney.org/prompt_coding
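For a rough sense of what a single run looks like as a script, here is a sketch only; the file names, the prompt wording, and the use of the OpenAI Python client are illustrative assumptions on my part, not the exact setup from that post:

    # Hypothetical one-prompt workflow: concatenate the library and the program
    # into a single prompt, ask for a complete rewrite, write the answer to a
    # new file. File names and model name are placeholders.
    from pathlib import Path
    from openai import OpenAI

    library_source = Path("library.py").read_text()
    program_source = Path("program.py").read_text()

    prompt = (
        "Rewrite the following program so that it uses the library below.\n"
        "Reply with the complete rewritten file and nothing else.\n\n"
        "=== library.py ===\n" + library_source + "\n\n"
        "=== program.py ===\n" + program_source
    )

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{"role": "user", "content": prompt}],
    )

    # One prompt in, one file out -- review it like any other diff before committing.
    Path("program_rewritten.py").write_text(response.choices[0].message.content)

The point is that the model's entire answer is the new file; there is no back-and-forth editing session to reconstruct afterwards.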

replies(58)
chadcmulligan ◴[] No.44568468[source]
Any code that's easy to define and tedious, I just get AIs to do now, and it's awesome. It saves me so much work, though you have to read the code; it still puts in odd stuff sometimes.
replies(1): >>44568650 #
cmdli ◴[] No.44568650[source]
How much of the code you are writing is tedious? If it's a significant amount, the framework you are using could use some improvement.
replies(6): >>44568708 #>>44568722 #>>44568724 #>>44568891 #>>44568956 #>>44570562 #
Karrot_Kream ◴[] No.44568708[source]
Maybe?

In some cases, definitely. Then good luck making the business case to improve the framework, or to swap it out and refactor around a different one. (Or you can do what I do during the more motivated/less busy times in my life: find undisturbed unpaid time to do it for your team.)

In other cases, improving the framework comes at the cost of introducing some magic that may obscure the intent of the code.

The nice thing about LLM code is that it's code. You're not monkey-patching a method. You're not subtly changing the behavior of a built-in. You're not adding a build step (though one could argue that LLM-generated code is akin to a separate build step). You're just checking in code. Other contributors can just read the code.
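To make the contrast concrete, an invented toy example: the first half shows the kind of framework magic being described (monkey-patching a built-in), the second the plain, explicit code an LLM typically emits instead. None of this comes from the thread; it is only an illustration.

    # Invented example of "magic": a framework monkey-patches a built-in so
    # every caller gets new behavior, and nothing at the call site tells the
    # reader this happened.
    import json

    _original_dumps = json.dumps

    def _dumps_sorted(obj, **kwargs):
        kwargs.setdefault("sort_keys", True)
        return _original_dumps(obj, **kwargs)

    json.dumps = _dumps_sorted  # behavior of a built-in changed process-wide

    # The plain-code alternative: the same effect, spelled out where it is
    # used, so other contributors can just read it.
    def dump_config(config: dict) -> str:
        return _original_dumps(config, sort_keys=True)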