
LLM Inevitabilism

(tomrenner.com)
1613 points by SwoopsFromAbove
mg
In the 90s a friend told me about the internet, and that he knew someone at a university who had access to it and could show us. An hour later, we were sitting in front of a computer at that university, watching his friend surf the web. Clicking on links, receiving pages of text. Faster than one could read. In a nice layout. Even with images. And links to other pages. We were shocked. No printing, no shipping, no waiting. This was the future. It was inevitable.

Yesterday I wanted to rewrite a program to use a large library, which would have required me to dive deep into its documentation or read its code to tackle my use case. As a first try, I just copy+pasted the whole library and my whole program into GPT-4.1 and told it to rewrite the program using the library. It succeeded on the first attempt. The rewrite itself was small enough that I could read all the code changes in 15 minutes and make a few stylistic changes. Done. Hours of time saved. This is the future. It is inevitable.

PS: Most replies seem to compare my experience with their own experience of agentic coding, where the developer iteratively changes the code by chatting with an LLM. I am not doing that. I use a "One prompt one file. No code edits." approach, which I describe here:

https://www.gibney.org/prompt_coding
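For the curious, the whole workflow fits in a short script. Below is a minimal sketch of that approach, assuming the OpenAI Python client and a single-file library; the file names, prompt wording, and model identifier are illustrative, not the exact ones used above.

```python
# Sketch of the "one prompt, one file, no code edits" workflow: concatenate
# the library source and the current program into a single prompt, ask the
# model to rewrite the program against the library, and save the complete
# result as a new file to review end to end.
#
# Assumptions (not from the original comment): the OpenAI Python client,
# a single-file library, and the prompt wording and file names below.
from pathlib import Path

from openai import OpenAI

library_src = Path("library.py").read_text()   # hypothetical library file
program_src = Path("program.py").read_text()   # hypothetical program file

prompt = (
    "Here is a library:\n\n"
    f"{library_src}\n\n"
    "Here is my program:\n\n"
    f"{program_src}\n\n"
    "Rewrite the program to use the library. "
    "Return only the complete rewritten program, nothing else."
)

client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": prompt}],
)

# One prompt, one whole file back: no iterative edits, just read the result
# and make any stylistic changes by hand.
Path("program_rewritten.py").write_text(response.choices[0].message.content)
```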

baxuz
The thing is, the data from actual research doesn't support your anecdotal evidence of quality:

- https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

- https://www.theregister.com/2025/06/29/ai_agents_fail_a_lot/

But more importantly, it makes you stupid:

- https://www.404media.co/microsoft-study-finds-ai-makes-human...

- https://archive.is/M3lCG

And it's an unsustainable bubble and wishful thinking, much like crypto:

- https://dmitriid.com/everything-around-llms-is-still-magical...

So while it may be a fun toy for senior devs who know what to look for, it actually makes them slower and stupider, leaving them progressively less capable of doing their job and applying critical thinking skills.

And as for juniors: they should steer clear of AI tools, since they can't assess the quality of the output, they learn nothing, and their critical thinking skills get impaired as well.

So with that in mind: who is the product (LLM coding tools) actually for, and what is its purpose?

I'm not even going into the moral, ethical, legal, social and ecological implications of offloading your critical thinking skills to a mega-corporation, which can only end up like https://youtu.be/LXzJR7K0wK0

handoflixue
"But more importantly, it makes you stupid:"

I don't think it was your intent, but that reads as a seriously uncalled-for attack; you might want to work on your phrasing. The Hacker News rules are pretty clear that civility is an important virtue.

baxuz
I didn't target the author, and I was using the terminology from the article's own headline.