LLM Inevitabilism

(tomrenner.com)
1611 points SwoopsFromAbove | 8 comments
keiferski No.44568304
One of the negative consequences of the “modern secular age” is that many very intelligent, thoughtful people feel justified in brushing away millennia of philosophical and religious thought because they deem it outdated or no longer relevant. (The book A Secular Age is a great read on this, btw, I think I’ve recommended it here on HN at least half a dozen times.)

And so a result of this is that they fail to notice the same recurring psychological patterns that underlie thoughts about how the world is, and how it will be in the future - and consequently fail to adjust their positions in light of this awareness.

For example - this AI inevitabilism stuff is not dissimilar to many ideas originally from the Reformation, like predestination. The notion that history is just on some inevitable pre-planned path is not a new idea, except now the actor has changed from God to technology. On a psychological level it’s the same thing: an offloading of freedom and responsibility to a powerful, vaguely defined force that may or may not exist outside the collective minds of human society.

replies(15): >>44568532 #>>44568602 #>>44568862 #>>44568899 #>>44569025 #>>44569218 #>>44569429 #>>44571000 #>>44571224 #>>44571418 #>>44572498 #>>44573222 #>>44573302 #>>44578191 #>>44578192 #
guelo No.44569025
Sorry, I don't buy your argument.

(First, I disagree with A Secular Age's thesis that secularism is a new force. Christian and Muslim churches were jailing and killing nonbelievers from the beginning. People weren't dumber than we are today; all the absurdity and self-serving hypocrisy that turns a lot of people off to authoritarian religion was as evident to them as it is to us.)

The idea is not that AI is on a pre-planned path; it's just that technological progress will continue, and from our vantage point today, predicting improving AI is a no-brainer. Technology has been accelerating since the invention of fire. Invention is a positive feedback loop in which previous inventions enable new inventions at an accelerating pace. Even when large civilizations of the past collapsed and libraries of knowledge were lost and we entered dark ages, human ingenuity did not rest, and eventually the feedback loop started up again. It's just not stoppable. I highly recommend Scott Alexander's essay Meditations on Moloch on why tech will always move forward, even when the results are disastrous to humans.

replies(2): >>44569118 #>>44570647 #
1. keiferski No.44569118
That isn’t the argument of the book, so I don’t think you actually read it, or even the Wikipedia page?

The rest of your comment doesn't really seem related to my argument at all. I didn't say technological progress stops or slows down; I pointed out how the thought patterns are often the same across time, and the inability and unwillingness to recognize this is psychologically lazy, to oversimplify. And there are indeed examples of technological acceleration or dispersal that were deliberately curtailed – especially with weapons.

replies(2): >>44569649 #>>44579995 #
2. TeMPOraL No.44569649
> I pointed out how the thought patterns are often the same across time, and the inability and unwillingness to recognize this is psychologically lazy, to oversimplify.

It's not lazy to follow thought patterns that yield correct predictions. And that's the bedrock on which "AI hype" grows and persists - because these tools are actually useful, right now, today, across a wide variety of work and life tasks, and we are barely even trying.

> And there are indeed examples of technological acceleration or dispersal that were deliberately curtailed – especially with weapons.

Name three.

(I do expect you to be able to name three, but that should also highlight how unusual such curtailment is, and how questionable its effectiveness is in practice when you dig into the details.)

Also, I challenge you to find even one restriction that actually denies countries useful capabilities they cannot reproduce through other means.

replies(1): >>44570065 #
3. keiferski No.44570065
Doesn’t seem that rare to me – chemical, biological, and nuclear weapons are all either not acceptable to use or not even acceptable to possess. Governments worldwide go to extreme lengths to prevent the proliferation of nuclear weapons. If there were no working restrictions on the development of the tech and the acquisition of the needed materials, every country and large military organization would probably have a nuclear weapons program.

Other examples are: human cloning, GMOs and food modification (depending on the country; some have definitely restricted these in their food supply), and certain medical procedures like lobotomies.

I don’t quite understand your last sentence there, but if I understand you correctly, it would seem to me like Ukraine or Libya are pretty obvious examples of countries that faced nuclear restrictions and could not reproduce their benefits through other means.

replies(2): >>44574028 #>>44574029 #
4. TeMPOraL No.44574029
> Governments worldwide go to extreme lengths to prevent the proliferation of nuclear weapons. If there were no working restrictions on the development of the tech and the acquisition of the needed materials, every country and large military organization would probably have a nuclear weapons program.

Nuclear is special due to the MAD doctrine; restrictions are aggressively enforced for safety reasons and to preserve the status quo, much more so than for moral reasons - and believe me, every country would love to have a nuclear weapons program, simply because, to put it frankly, you're not fully independent without nukes. A nuclear deterrent is what buys you strategic autonomy.

It's really the one weird case where those who got there first decided to deny their advantage to others, and most others just begrudgingly accept this state of affairs - as unfair as it is, it's the local equilibrium in global safety.

But that's nukes; nukes are special. AI is sometimes painted as the second invention that could become special in this way, but I personally doubt it - to me, AI is much more like biological weapons than nuclear ones: it doesn't work as a deterrent (so no MAD), but it is ideal for turning a research mishap into an extinction-level event.

> Other examples are: human cloning, GMOs or food modification (depends on the country; some definitely have restricted this on their food supply), certain medical procedures like lobotomies.

Human cloning - I'd be inclined to grant you that one, though I haven't checked what's up with China recently. GMO restrictions are local policy issues, and don't affect R&D on a global scale all that much. Lobotomy - fair. But then it didn't stop the field of neurosurgery at all.

> I don’t quite understand your last sentence there, but if I understand you correctly, it would seem to me like Ukraine or Libya are pretty obvious examples of countries that faced nuclear restrictions and could not reproduce their benefits through other means.

Right, the invasion of Ukraine is exactly why no nuclear-capable country will even consider giving nukes up. This advantage cannot be reproduced through other means in enough situations. But I did mean it more generally, so let me rephrase it:

Demand begets supply. If there's a strong demand for some capability, but the means of providing it are questionable, then whether or not they can be successfully suppressed depends on whether there are other ways of meeting the demand.

Nuclear weapons are, again, special - they have no substitute, but almost everyone gains more from keeping the "nuclear club" closed than from joining it. But even as there are international limits, just observe how far nations go to skirt them to keep the R&D going (look no further than NIF - a.k.a. "let's see how far we can push nuclear weapons research if we substitute live tests with lasers and a lot of computer simulations").

Biological and chemical weapons are effectively banned (+/- recent news about Russia), but don't provide unique and useful capabilities on a battlefield, so there's not much demand for them.

(Chemical weapons showing up in the news now only strengthens the overall point: it's easy to refrain from using/developing things you don't need - but then restrictions and treaties fly out the window the moment you're losing and run out of alternatives.)

Same for full-human cloning - but there is demand for transplantable organs, as well as for a better substrate for pharmaceutical testing; the former can be met more cheaply through market and black-market means, while the latter is driving several fields of research that are adjacent to human cloning but more focused on meeting the actual demand, coincidentally avoiding most of the ethical concerns raised.

And so on, and so on. Circling back to AI, what I'm saying is: AI is already providing too much direct, object-level utility, and that utility cannot be substituted by other means (AI being itself a cheaper substitute for human labor). The demand is already there, so it's near-impossible to stop the tide at this point. You simply won't get people to agree on this.

5. stale2002 No.44574028
I can't make a nuclear or chemical weapon on my gaming graphics card from 5 years ago.

The same is not true of LLMs.

No, LLMs aren't going to be stopped when anyone with a computer from the last couple of years is able to run them on their desktop. (There are smaller LLMs that can even be run on your mobile phone!)

The laws required to stop this would be draconian. It would require full government monitoring of all computers. And any country or group that "defects" by allowing people to use LLMs would gain a massive benefit.
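
To make that concrete, here's a minimal sketch of what local inference looks like, assuming the llama-cpp-python bindings; the .gguf filename is a placeholder for whatever small quantized model you've downloaded, not a specific recommendation:

    # Minimal local-inference sketch. Assumes `pip install llama-cpp-python`;
    # the model filename below is a placeholder, not a real file.
    from llama_cpp import Llama

    llm = Llama(model_path="./some-small-model-q4.gguf")  # loads on CPU by default
    out = llm("Q: Can a 5-year-old gaming PC run an LLM? A:", max_tokens=64)
    print(out["choices"][0]["text"])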

replies(2): >>44574215 #>>44576167 #
6. TeMPOraL No.44574215
Yup. The governments of the world could shut down all LLM providers tomorrow, and it wouldn't change a thing - LLMs fundamentally are programs, not a service. There are models lagging 6-12 months behind the current SOTA that you can just download and run on your own GPU today; most research is in the open too, so nothing stops people from continuing it and training new models locally.
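
For illustration, the "download and run" path is only a few lines - a sketch assuming the Hugging Face transformers library (plus torch and accelerate), with the model name standing in for any openly downloadable weights:

    # Sketch: fetch open weights once, then generate entirely on local hardware.
    # Assumes `pip install transformers torch accelerate`; the model name is
    # just one example of a small, openly downloadable model.
    from transformers import pipeline

    generate = pipeline(
        "text-generation",
        model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # cached locally after first download
        device_map="auto",  # places the model on your GPU if one is available
    )
    print(generate("Explain the feedback loop of invention.", max_new_tokens=60)[0]["generated_text"])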

At this point, AI research is not possible to stop without killing humanity as a technological civilization - and it's not even possible to slow it down much, short of taking the extreme measures Eliezer Yudkowsky was talking about years ago: yes, it would literally take a multinational treaty on stopping advanced compute, aggressively enforced - including (but not limited to) by preemptively bombing rogue data centers as they pop up around the world.

7. ben_w No.44576167
> I can't make a nuclear or chemical weapon on my gaming graphics card from 5 years ago.

You may be surprised to learn that you can make a chemical weapon on your gaming graphics card from 5 years ago.

It's just that it will void the warranty well before you have a meaningful quantity of chlorine gas from the salt water you dunked it in while switched on.

8. munksbeer No.44579995
> And there are indeed examples of technological acceleration or dispersal that were deliberately curtailed – especially with weapons

Which examples? Despite curtailment, new countries have acquired nuclear weapons over time.

Efforts to squash technology exist, such as cloning bans, and so on, but they will only work for so long. You might think I'm making a "predestination" argument here, but I'm not. I'm observing the powerful incentives at play (first-past-the-post advantage), noting that historically technology has always advanced, and making a bet that technology will continue to advance. I am supremely confident in that bet. I could of course go out and protest, but there's also something that doesn't seem present in the original post or your argument: many (maybe most) of us don't want to stop technological progress.