
LLM Inevitabilism

(tomrenner.com)
1681 points | SwoopsFromAbove | 2 comments
mg ◴[] No.44568158[source]
In the 90s a friend told me about the internet. He said he knew someone at a university who had access to it and could show us. An hour later, we were sitting in front of a computer in that university and watched his friend surf the web. Clicking on links, receiving pages of text. Faster than one could read. In a nice layout. Even with images. And links to other pages. We were shocked. No printing, no shipping, no waiting. This was the future. It was inevitable.

Yesterday I wanted to rewrite a program to use a large library, which would have required me to dive deep into the documentation or read its code to tackle my use case. As a first try, I just copy+pasted the whole library and my whole program into GPT-4.1 and told it to rewrite the program using the library. It succeeded on the first attempt. The rewrite itself was small enough that I could read all the code changes in 15 minutes and make a few stylistic tweaks. Done. Hours of time saved. This is the future. It is inevitable.

PS: Most replies compare my experience to the responders' own experiences with agentic coding, where the developer iteratively changes the code by chatting with an LLM. I am not doing that. I use a "One prompt one file. No code edits." approach, which I describe here:

https://www.gibney.org/prompt_coding
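
To make that concrete, here is a minimal sketch of what such a flow can look like, assuming the official openai Python client; the file names, prompt wording, and model string are illustrative, not my exact setup:

    # Sketch of a "one prompt, one file" rewrite, assuming the official
    # openai Python client; file names and prompt wording are illustrative.
    from pathlib import Path

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    library_src = Path("library.py").read_text()
    program_src = Path("program.py").read_text()

    prompt = (
        "Rewrite the program below so that it uses the library below. "
        "Return the complete rewritten program as a single file.\n\n"
        f"=== LIBRARY ===\n{library_src}\n\n"
        f"=== PROGRAM ===\n{program_src}\n"
    )

    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{"role": "user", "content": prompt}],
    )

    # One prompt, one file: write the whole answer out and review it by hand.
    Path("program_rewritten.py").write_text(response.choices[0].message.content)

The point is that there is no chat loop: the whole library and the whole program go in as a single prompt, and a single rewritten file comes back out for review.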

replies(58): >>44568182 #>>44568188 #>>44568190 #>>44568192 #>>44568320 #>>44568350 #>>44568360 #>>44568380 #>>44568449 #>>44568468 #>>44568473 #>>44568515 #>>44568537 #>>44568578 #>>44568699 #>>44568746 #>>44568760 #>>44568767 #>>44568791 #>>44568805 #>>44568823 #>>44568844 #>>44568871 #>>44568887 #>>44568901 #>>44568927 #>>44569007 #>>44569010 #>>44569128 #>>44569134 #>>44569145 #>>44569203 #>>44569303 #>>44569320 #>>44569347 #>>44569391 #>>44569396 #>>44569574 #>>44569581 #>>44569584 #>>44569621 #>>44569732 #>>44569761 #>>44569803 #>>44569903 #>>44570005 #>>44570024 #>>44570069 #>>44570120 #>>44570129 #>>44570365 #>>44570482 #>>44570537 #>>44570585 #>>44570642 #>>44570674 #>>44572113 #>>44574176 #
AndyKelley ◴[] No.44568699[source]
You speak with a passive voice, as if the future is something that happens to you, rather than something that you participate in.
replies(7): >>44568718 #>>44568811 #>>44568842 #>>44568904 #>>44569270 #>>44569402 #>>44570058 #
stillpointlab ◴[] No.44568842[source]
There is an old cliché about stopping the tide coming in. I mean, yeah, you can get out there and participate in trying to stop it.

This isn't about fatalism or even pessimism. The tide coming in isn't good or bad. It's more like the refrain from Game of Thrones: Winter is coming. You prepare for it. Your time might be better served finding shelter and warm clothing rather than engaging in a futile attempt to prevent it.

replies(3): >>44568902 #>>44568942 #>>44569622 #
FeepingCreature ◴[] No.44568942[source]
Reminder that the Dutch exist.
replies(2): >>44569000 #>>44569206 #
stillpointlab ◴[] No.44569000[source]
They're not stopping the tide, they are preparing for it, as I suggested. The tide is still happening; it just isn't causing the flooding.

So in that sense we agree. Let's be like the Dutch. Let's recognize the coming tide and build defenses against it.

replies(1): >>44569176 #
FeepingCreature ◴[] No.44569176[source]
They are kinda literally stopping the tide from coming in, though. They're preparing for it by blocking it off completely.

That is a thing that humans can do if they want it enough.

replies(2): >>44569477 #>>44569576 #
stillpointlab ◴[] No.44569576{7}[source]
As the other commenter noted, you are simply wrong about that. We control the effects the tide has on us, not the tide itself.

But let me offer you a false dichotomy for the purposes of argument:

1. You spend your efforts preventing the emergence of AI

2. You spend your efforts creating conditions for the harmonious co-existence of AI and humanity

It's your choice.

replies(1): >>44580210 #
FeepingCreature ◴[] No.44580210{8}[source]
As things stand, 2 is impossible without 1. There simply is not enough time to figure out safe coexistence. These are not projects of equal difficulty: 1 is enormously easier than 2. And 1 is still a global effort!
replies(1): >>44585682 #
stillpointlab ◴[] No.44585682{9}[source]
You have no evidence for any of your claims (either of "impossibility" or of degree of difficulty), and I strongly doubt your rationalization will stand the test of reality.

You are also completely moving the goalposts. My original comment was about the hubris of man in trying to prevent processes that operate at a scale beyond his means. The processes driving the march towards AI are beyond your ability to stop. And now you are arguing (again, with no evidence) about the relative difficulty of slowing it down (a much weaker claim than stopping it) vs. contributing to safe co-existence.

But in the interest of finding some common ground, let me point out: attempting to slow it down is actually getting on board with my project (although in a way I think is ineffective). It starts with accepting that it can't be prevented and choosing a way to contribute to safe coexistence: providing enough time to figure it out.

replies(1): >>44590660 #
FeepingCreature ◴[] No.44590660{10}[source]
Man's scale is Earth.

You know, I think you have no evidence for any of your claims of "impossibility" either. And I'd argue there's a ton of counterevidence where man, completely ignoring how impossible that's supposed to be, effects change on a global scale.

You're comparing two dissimilar things. On the one hand, slowing it down (which, contrary to your claim that I'm moving the goalposts, is at sufficient investment effectively equal to stopping it); on the other, "contributing" to safe co-existence, which is trivially achieved by literally doing anything. I'm telling you that if we merely "contribute" to safe co-existence, we all die. The standard, and it really is the standard in any other field, is proving safe coexistence to a reasonable standard. Which should hopefully make clear where the difficulty lies: we have nothing. Even with all the interpretability research, and I'm not slagging interpretability, this field is in its absolute infancy.

"It can't be prevented" simply erases the most important distinction: if we get ASI tomorrow, we're in a fundamentally different position than if we get ASI in 50 years after a heroic global effort to work out safety, interpretability, guidance and morality.

replies(1): >>44596984 #
stillpointlab ◴[] No.44596984[source]
> I'm telling you that if we merely "contribute" to safe co-existence, we all die.

I hear you. I believe you are wrong.

> it really is the standard in any other field, is proving safe coexistence to a reasonable standard

No it isn't. It often becomes the standard after the fact. But pretty much no invention of man went through a committee first. Can you provide some counter-examples? Did the Wright brothers prove flight was safe before they got on the first plane? Did the inventors of CRISPR technology "prove" it is safe? Or human cloning? Or nuclear fission? Your very argument rests on the mistakes humans made in the past and the out-sized consequences of making the same kinds of mistakes with AI. Your argument must be: we have to do things differently this time with AI because the stakes are higher.

These are old and boring arguments. I've been watching the LessWrong space since it was Overcoming Bias (and frankly, from before). I've heard all of the arguments they have to make.

But the content of this discussion was on inevitability and how to respond to it. The person I replied to suggested that it was a mistake to see the future as something that happens to us. It was a call to agency. I was pointing out that not all agency is equal, and hubris can lead us to actions that are not productive.

It is also the case that fear, just like hubris, can lead us to actions that aren't productive. But perhaps we should just move on from this discussion.