
LLM Inevitabilism

(tomrenner.com)
1616 points by SwoopsFromAbove | 1 comment
mg ◴[] No.44568158[source]
In the 90s a friend told me about the internet, and said he knew someone at a university who had access to it and could show us. An hour later, we were sitting in front of a computer at that university, watching his friend surf the web. Clicking on links, receiving pages of text. Faster than one could read. In a nice layout. Even with images. And links to other pages. We were shocked. No printing, no shipping, no waiting. This was the future. It was inevitable.

Yesterday I wanted to rewrite a program to use a large library, which would have required me to dive deep into the documentation or read its code to tackle my use case. As a first try, I just copy+pasted the whole library and my whole program into GPT 4.1 and told it to rewrite the program using the library. It succeeded on the first attempt. The rewrite itself was small enough that I could read all the code changes in 15 minutes and make a few stylistic changes. Done. Hours of time saved. This is the future. It is inevitable.

PS: Most replies seem to compare my experience to experiences that the responders have with agentic coding, where the developer is iteratively changing the code by chatting with an LLM. I am not doing that. I use a "One prompt one file. No code edits." approach, which I describe here:

https://www.gibney.org/prompt_coding
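The "one prompt one file" workflow above can be sketched as assembling a single self-contained prompt from the two source files and sending it in one shot. This is a minimal illustration, not the author's actual tooling: the instruction text, file contents, and function names here are all hypothetical.

```python
# Sketch of a "one prompt, one file, no code edits" workflow:
# concatenate the whole library and the whole program into a single
# prompt for a one-shot rewrite. No iterative chat, no per-edit
# back-and-forth; the model's single reply replaces the whole file.

def build_prompt(library_src: str, program_src: str) -> str:
    """Assemble one self-contained prompt from two source files."""
    return (
        "Rewrite the following program to use the library below. "
        "Return the complete rewritten program as a single file.\n\n"
        "=== LIBRARY ===\n" + library_src +
        "\n=== PROGRAM ===\n" + program_src + "\n"
    )

# Illustrative stand-ins for the real files:
library_src = "def greet(name):\n    return f'Hello, {name}!'\n"
program_src = "print('Hello, world!')\n"

prompt = build_prompt(library_src, program_src)
# The prompt would then go to the model in a single API call;
# the reply is read once, lightly reviewed, and saved as the new file.
print(len(prompt))
```

The point of the design is that review cost stays bounded: because the model returns one complete file, the human step is a single read-through rather than tracking a series of incremental edits.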

pron ◴[] No.44570129[source]
> This is the future. It is inevitable.

"This" does a lot of unjustifiable work here. "This" refers to your successful experience which, I assume, involved a program no larger than a few tens of thousands of lines of code, if that, and it saved you only a few hours of work. The future you're referring to, however, is an extrapolation of "this", where a program writes arbitrary programs for us. Is that future inevitable? Possibly, but it's not quite "this": we can't yet do that, we don't know when we'll be able to, and we don't know that LLMs are what gets us there.

But if we're extrapolating from relatively minor things we can do today to big things we could do in the future, I would say that you're thinking too small. If program X could write program Y for us, for some arbitrary Y, why would we want Y in the first place? If we're dreaming about what may be possible, why would we need any program at all other than X? Saying that that is the inevitable future sounds to me like someone, at the advent of machines, declaring that the inevitable future is one where machines automatically clean the streets after our horses, or perhaps one where we're carried everywhere on conveyor belts. Focusing on LLMs is like such a person saying that in the future, everything will inevitably be powered by steam engines. In the end, horses were replaced wholesale, but not by conveyor belts, and while automation carried on, it wasn't the steam engine that powered most of it.

roxolotl ◴[] No.44570260{3}[source]
Absolutely couldn’t agree more. Incredibly useful tools are, in fact, incredibly useful. These discussions get clouded, though, when we intentionally ignore what’s being said by those doing the investing. The inevitability here isn’t that they’ll save 30% of dev time and we’ll get better software with fewer employees. It’s that come 2030 (hell, there’s even that 2027 paper), LLMs will be more effective than people at most tasks. Maybe at some point that’ll happen, but looking at other normal technology[0] it takes decades.

0: https://knightcolumbia.org/content/ai-as-normal-technology

loudmax ◴[] No.44570698{4}[source]
Looking at the rollout of the internet, it did take decades. There was a lot of nonsensical hype in the dotcom era, most famously pets.com taking out an ad during the Super Bowl. Most of those companies burned through their VC and went out of business. Yet here we are today. It's totally normal to get your pet food from chewy.com, and modern life without the internet is unimaginable.

Today we see a clear path toward machines that can take on most of the intellectual labor that humans do. Scott Alexander's 2027 time frame seems optimistic (or pessimistic, depending on how you feel about the outcome). But by, say, 2037? The only way that vision of the future doesn't come true is an economic collapse that puts us back to 20th-century technology. Focusing on whether the technology is LLMs or diffusion models or whatever is splitting hairs.

roxolotl ◴[] No.44571478{5}[source]
Timelines are relevant though. Inevitability is only a useful proposition if the timeline is constrained. It is inevitable that the earth will be swallowed by the sun, but rightfully no one gives a shit. I think most people, even the author of this piece (aside from those who believe there's something fundamental about human intelligence that isn't reproducible), would say AI is inevitable on a long enough timeline. The arguments being made, though, are that AI is inevitable in the short term. Is 12 years short term? Maybe?

Regardless, though, when we break down the timelines we start to enable useful conversations. It's one thing to argue with a frame of "over X period of time Y will happen". It's another to say "it's inevitable, so get on board". This piece, myself, and many others are frustrated by the latter.