
LLM Inevitabilism

(tomrenner.com)
1611 points SwoopsFromAbove | 13 comments
mg No.44568158
In the 90s a friend told me about the internet. And that he knows someone who is in a university and has access to it and can show us. An hour later, we were sitting in front of a computer in that university and watched his friend surfing the web. Clicking on links, receiving pages of text. Faster than one could read. In a nice layout. Even with images. And links to other pages. We were shocked. No printing, no shipping, no waiting. This was the future. It was inevitable.

Yesterday I wanted to rewrite a program to use a large library that would have required me to dive deep down into the documentation or read its code to tackle my use case. As a first try, I just copy+pasted the whole library and my whole program into GPT 4.1 and told it to rewrite it using the library. It succeeded at the first attempt. The rewrite itself was small enough that I could read all code changes in 15 minutes and make a few stylistic changes. Done. Hours of time saved. This is the future. It is inevitable.

PS: Most replies seem to compare my experience to experiences that the responders have with agentic coding, where the developer is iteratively changing the code by chatting with an LLM. I am not doing that. I use a "One prompt one file. No code edits." approach, which I describe here:

https://www.gibney.org/prompt_coding

replies(58): >>44568182 #>>44568188 #>>44568190 #>>44568192 #>>44568320 #>>44568350 #>>44568360 #>>44568380 #>>44568449 #>>44568468 #>>44568473 #>>44568515 #>>44568537 #>>44568578 #>>44568699 #>>44568746 #>>44568760 #>>44568767 #>>44568791 #>>44568805 #>>44568823 #>>44568844 #>>44568871 #>>44568887 #>>44568901 #>>44568927 #>>44569007 #>>44569010 #>>44569128 #>>44569134 #>>44569145 #>>44569203 #>>44569303 #>>44569320 #>>44569347 #>>44569391 #>>44569396 #>>44569574 #>>44569581 #>>44569584 #>>44569621 #>>44569732 #>>44569761 #>>44569803 #>>44569903 #>>44570005 #>>44570024 #>>44570069 #>>44570120 #>>44570129 #>>44570365 #>>44570482 #>>44570537 #>>44570585 #>>44570642 #>>44570674 #>>44572113 #>>44574176 #
AndyKelley No.44568699
You speak with a passive voice, as if the future is something that happens to you, rather than something that you participate in.
replies(7): >>44568718 #>>44568811 #>>44568842 #>>44568904 #>>44569270 #>>44569402 #>>44570058 #
stillpointlab No.44568842
There is an old cliché about stopping the tide coming in. I mean, yeah you can get out there and participate in trying to stop it.

This isn't about fatalism or even pessimism. The tide coming in isn't good or bad. It's more like the refrain from Game of Thrones: Winter is coming. You prepare for it. Your time might be better served finding shelter and warm clothing rather than engaging in a futile attempt to prevent it.

replies(3): >>44568902 #>>44568942 #>>44569622 #
1. OtomotO No.44568902
The last tide was the blockchain (hype), which was supposed to solve everyone's problems about a decade ago.

How come there is even anything left for LLMs to solve?

replies(1): >>44568944 #
2. dr_dshiv No.44568944
The difference between hype and reality is productivity: LLMs are productively used by hundreds of millions of people. Blockchain is useful primarily in the imagination.

It’s just really not comparable.

replies(2): >>44568986 #>>44572282 #
3. OtomotO No.44568986
No, it's overinvestment.

And I don't get why most people are, or appear to be, divided into two camps.

Either it's total shit, or it's the holy cup of truth, here to solve all our problems.

It's neither. It's a tool. Like a shovel, it's good at something. And like a shovel it's bad at other things. E.g. I wouldn't use a shovel to hammer in a nail.

LLMs will NEVER become true AGI. But do they need to? No, of course not!

My biggest problem with LLMs isn't the shit code they produce from time to time, as I am paid to resolve messes, it's the environmental impact of MINDLESSLY using one.

But whatever. People like cults and anti-cults are cults too.

replies(5): >>44569047 #>>44569354 #>>44569366 #>>44569615 #>>44570167 #
4. TeMPOraL No.44569047{3}
There are two different groups with different perspectives and relationships to the "AI hype"; I think we're talking in circles in this subthread because we're talking about different people.

See https://news.ycombinator.com/item?id=44208831. Quoting myself (sorry):

> For me, one of the Beneficiaries, the hype seems totally warranted. The capability is there, the possibilities are enormous, pace of advancement is staggering, and achieving them is realistic. If it takes a few years longer than the Investor group thinks - that's fine with us; it's only a problem for them.

5. dr_dshiv No.44569354{3}
Your concern is the environmental impact? Why pick on LLMs vs Amazon or your local drug store? Or a local restaurant, for that matter?

Do the calculations for how much LLM use is required to equal one hamburger's worth of CO2, or the CO2 of commuting to work in a car.

If my daily LLM environmental impact is comparable to my lunch or going to work, it’s really hard to fault, IMO. They aren’t building data centers in the rainforest.

replies(1): >>44570009 #
6. ben_w No.44569366{3}
I broadly agree with your point, but would also draw attention to something I've observed:

> LLMs will NEVER become true AGI. But do they need to? No, of course not!

Everyone disagrees about the meaning of each of the three letters in the initialism "AGI", disagrees about the compound whole, and often argues it means something different from the simple meaning of those words separately.

Even on this website, "AGI" means anything from "InstructGPT" (the precursor to ChatGPT) to "Biblical God", or, even worse than "God" given this is a tech forum, "can solve provably impossible tasks such as the halting problem".

replies(1): >>44570022 #
7. modo_mario No.44569615{3}
> it's the environmental impact of MINDLESSLY using one.

Isn't much of that environmental impact currently from training the model rather than from usage? That's something you could arguably stop doing one day, once you're satisfied with the progress on that front (admittedly, people won't be any time soon).

I'm no expert on this front. It's a genuine question based on what I've heard and read.

8. OtomotO No.44570009{4}
Why do you assume I am not concerned about the other sources of environmental impact?

Of course I don't go around posting everything I am concerned about when we are talking about a specific topic.

You're aware, though, that because of the AI hype, sustainability programs were cut at all major tech firms?

replies(1): >>44572034 #
9. OtomotO No.44570022{4}
Well, I go by the definition I was brought up with and am not interested in redefining words all the time.

A true AGI is basically Skynet or the Basilisk ;-)

replies(1): >>44570660 #
10. blackoil No.44570167{3}
Overinvestment isn't a bug; it's a feature of capitalism. When the dust settles there'll be a few trillion-dollar pots, and hundreds of billions are being spent to get one of them.

Environmental impacts of GenAI/LLM ecosystem are highly overrated.

11. ben_w No.44570660{5}
Most of us do; but if we're all using different definitions, then no communication is possible.
12. dr_dshiv No.44572034{5}
It also correlated with the discovery that voluntary carbon credits weren’t sufficient for their environmental marketing.

If carbon credits were viewed as valid, I’m pretty sure they would have kept the programs.

13. immibis No.44572282
> productively used

This chart is extremely damning: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

The industry consistently predicts people will do the task quicker with AI. The people who are doing the task predict they'll do it quicker if they can use AI. After doing the task with AI, they predict they did it quicker because they used AI. People who did it without AI predict they could have done it quicker with AI. But they actually measured how long it takes. It turns out, they do it slower if they use AI. This is damning.

It's a dopamine machine. It makes you feel good, but with no reality behind it and no work to achieve it. It's no different in this regard from (some) hard drugs. A rat with a lever wired to the pleasure center in its brain keeps pressing that lever until it dies of starvation.

(Yes, it's very surprising that you can create this effect without putting chemicals or electrodes in your brain. Social media achieved it first, though.)