
LLM Inevitabilism

(tomrenner.com)
1613 points | SwoopsFromAbove | 3 comments
mg ◴[] No.44568158[source]
In the 90s a friend told me about the internet, and that he knew someone at a university who had access to it and could show us. An hour later, we were sitting in front of a computer at that university, watching his friend surf the web. Clicking on links, receiving pages of text. Faster than one could read. In a nice layout. Even with images. And links to other pages. We were shocked. No printing, no shipping, no waiting. This was the future. It was inevitable.

Yesterday I wanted to rewrite a program to use a large library, which would have required me to dive deep into the documentation or read its code to tackle my use case. As a first try, I just copy+pasted the whole library and my whole program into GPT 4.1 and told it to rewrite the program using the library. It succeeded on the first attempt. The rewrite itself was small enough that I could read all code changes in 15 minutes and make a few stylistic changes. Done. Hours of time saved. This is the future. It is inevitable.

PS: Most replies seem to compare my experience to experiences that the responders have with agentic coding, where the developer is iteratively changing the code by chatting with an LLM. I am not doing that. I use a "One prompt one file. No code edits." approach, which I describe here:

https://www.gibney.org/prompt_coding
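The "one prompt, one file" workflow above can be sketched in a few lines. This is a minimal, illustrative sketch, not the author's actual tooling: it assumes the `openai` Python client, a model name of "gpt-4.1", and hypothetical file paths; the prompt wording is my own.

```python
import os


def build_prompt(library_src: str, program_src: str) -> str:
    """Concatenate the full library and program sources into a single
    prompt that asks for one complete rewritten file (no diffs, no chat)."""
    return (
        "Rewrite the following program so that it uses the library below.\n"
        "Reply with the complete rewritten program as a single file.\n\n"
        "--- LIBRARY ---\n" + library_src +
        "\n--- PROGRAM ---\n" + program_src
    )


def rewrite(library_path: str, program_path: str, out_path: str) -> None:
    """One prompt in, one file out: read both sources, ask the model once,
    write the reply to disk for human review."""
    with open(library_path) as f:
        library_src = f.read()
    with open(program_path) as f:
        program_src = f.read()

    # The actual model call; requires the `openai` package and an API key
    # in the OPENAI_API_KEY environment variable.
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{"role": "user", "content": build_prompt(library_src, program_src)}],
    )
    with open(out_path, "w") as f:
        f.write(resp.choices[0].message.content)
```

The point of the approach is that the human never edits code inside a chat loop: the model produces one candidate file, and review happens in your own editor or diff tool afterwards.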

replies(58): >>44568182 #>>44568188 #>>44568190 #>>44568192 #>>44568320 #>>44568350 #>>44568360 #>>44568380 #>>44568449 #>>44568468 #>>44568473 #>>44568515 #>>44568537 #>>44568578 #>>44568699 #>>44568746 #>>44568760 #>>44568767 #>>44568791 #>>44568805 #>>44568823 #>>44568844 #>>44568871 #>>44568887 #>>44568901 #>>44568927 #>>44569007 #>>44569010 #>>44569128 #>>44569134 #>>44569145 #>>44569203 #>>44569303 #>>44569320 #>>44569347 #>>44569391 #>>44569396 #>>44569574 #>>44569581 #>>44569584 #>>44569621 #>>44569732 #>>44569761 #>>44569803 #>>44569903 #>>44570005 #>>44570024 #>>44570069 #>>44570120 #>>44570129 #>>44570365 #>>44570482 #>>44570537 #>>44570585 #>>44570642 #>>44570674 #>>44572113 #>>44574176 #
AndyKelley ◴[] No.44568699[source]
You speak with a passive voice, as if the future is something that happens to you, rather than something that you participate in.
replies(7): >>44568718 #>>44568811 #>>44568842 #>>44568904 #>>44569270 #>>44569402 #>>44570058 #
stillpointlab ◴[] No.44568842[source]
There is an old cliché about trying to stop the tide from coming in. I mean, yeah, you can get out there and participate in trying to stop it.

This isn't about fatalism or even pessimism. The tide coming in isn't good or bad. It's more like the refrain from Game of Thrones: Winter is coming. You prepare for it. Your time might be better served finding shelter and warm clothing rather than engaging in a futile attempt to prevent it.

replies(3): >>44568902 #>>44568942 #>>44569622 #
OtomotO ◴[] No.44568902[source]
The last tide was the blockchain (hype), which was already supposed to solve everyone's problems about a decade ago.

How come there even is anything left to solve for LLMs?

replies(1): >>44568944 #
dr_dshiv ◴[] No.44568944[source]
The difference between hype and reality is productivity: LLMs are productively used by hundreds of millions of people. Blockchain is useful primarily in the imagination.

They’re just not comparable.

replies(2): >>44568986 #>>44572282 #
OtomotO ◴[] No.44568986[source]
No, it's overinvestment.

And I don't understand why most people are, or appear to be, divided into two camps.

Either it's total shit, or it's the holy grail of truth, here to solve all our problems.

It's neither. It's a tool. Like a shovel, it's good at something. And like a shovel it's bad at other things. E.g. I wouldn't use a shovel to hammer in a nail.

LLMs will NEVER become true AGI. But do they need to? No, of course not!

My biggest problem with LLMs isn't the shit code they produce from time to time (I am paid to clean up messes); it's the environmental impact of MINDLESSLY using one.

But whatever. People like cults, and anti-cults are cults too.

replies(5): >>44569047 #>>44569354 #>>44569366 #>>44569615 #>>44570167 #
ben_w ◴[] No.44569366[source]
I broadly agree with your point, but would also draw attention to something I've observed:

> LLMs will NEVER become true AGI. But do they need to? No, of course not!

Everyone disagrees about the meaning of each of the three letters of the initialism "AGI", and also about the compound whole; people often argue it means something different from the plain meaning of those words taken separately.

Even on this website, "AGI" means anything from "InstructGPT" (the precursor to ChatGPT) to "Biblical God", or, even worse than "God" given this is a tech forum, "can solve provably impossible tasks such as the halting problem".

replies(1): >>44570022 #
OtomotO ◴[] No.44570022[source]
Well, I go by the definition I was brought up with, and I am not interested in redefining words all the time.

A true AGI is basically Skynet or the Basilisk ;-)

replies(1): >>44570660 #
ben_w ◴[] No.44570660[source]
Most of us do; but if we're all using different definitions, then no communication is possible.