
LLM Inevitabilism

(tomrenner.com)
1616 points | SwoopsFromAbove | 3 comments | source
keiferski ◴[] No.44568304[source]
One of the negative consequences of the “modern secular age” is that many very intelligent, thoughtful people feel justified in brushing away millennia of philosophical and religious thought because they deem it outdated or no longer relevant. (The book A Secular Age is a great read on this, btw, I think I’ve recommended it here on HN at least half a dozen times.)

And so one result of this is that they fail to notice the same recurring psychological patterns that underlie thoughts about how the world is, and how it will be in the future - and therefore fail to adjust their positions in light of that awareness.

For example - this AI inevitabilism stuff is not dissimilar to many ideas originally from the Reformation, like predestination. The notion that history is just on some inevitable pre-planned path is not a new idea, except now the actor has changed from God to technology. On a psychological level it’s the same thing: an offloading of freedom and responsibility to a powerful, vaguely defined force that may or may not exist outside the collective minds of human society.

replies(15): >>44568532 #>>44568602 #>>44568862 #>>44568899 #>>44569025 #>>44569218 #>>44569429 #>>44571000 #>>44571224 #>>44571418 #>>44572498 #>>44573222 #>>44573302 #>>44578191 #>>44578192 #
guelo ◴[] No.44569025[source]
Sorry I don't buy your argument.

(First I disagree with A Secular Age's thesis that secularism is a new force. Christian and Muslim churches were jailing and killing nonbelievers from the beginning. People weren't dumber than we are today, all the absurdity and self-serving hypocrisy that turns a lot of people off to authoritarian religion were as evident to them as they are to us.)

The idea is not that AI is on a pre-planned path, it's just that technological progress will continue, and from our vantage point today predicting improving AI is a no-brainer. Technology has been accelerating since the invention of fire. Invention is a positive feedback loop in which previous inventions enable new inventions at an accelerating pace. Even when large civilizations of the past collapsed, libraries of knowledge were lost, and we entered dark ages, human ingenuity did not rest, and eventually the feedback loop started up again. It's just not stoppable. I highly recommend Scott Alexander's essay Meditations On Moloch on why tech will always move forward, even when the results are disastrous to humans.

replies(2): >>44569118 #>>44570647 #
keiferski ◴[] No.44569118{3}[source]
That isn’t the argument of the book, so I don’t think you actually read it, or even the Wikipedia page?

The rest of your comment doesn’t really seem related to my argument at all. I didn’t say technological progress stops or slows down, I pointed out how the thought patterns are often the same across time, and the inability and unwillingness to recognize this is psychologically lazy, to over simplify. And there are indeed examples of technological acceleration or dispersal which was deliberately curtailed – especially with weapons.

replies(2): >>44569649 #>>44579995 #
TeMPOraL ◴[] No.44569649{4}[source]
> I pointed out how the thought patterns are often the same across time, and the inability and unwillingness to recognize this is psychologically lazy, to over simplify.

It's not lazy to follow thought patterns that yield correct predictions. And that's the bedrock on which "AI hype" grows and persists - because these tools are actually useful, right now, today, across a wide variety of work and life tasks, and we are barely even trying.

> And there are indeed examples of technological acceleration or dispersal which was deliberately curtailed – especially with weapons.

Name three.

(I do expect you to be able to name three, but that should also highlight how unusual that is, and how questionable the effectiveness of that is in practice when you dig into details.)

Also, I challenge you to find even one restriction that actually denies countries useful capabilities that they cannot reproduce through other means.

replies(1): >>44570065 #
keiferski ◴[] No.44570065{5}[source]
Doesn’t seem that rare to me – chemical, biological, nuclear weapons are all either not acceptable to use or not even acceptable to possess. Global governments go to extreme lengths to prevent the proliferation of nuclear weapons. If there were no working restrictions on the development of the tech and the acquisition of needed materials, every country and large military organization would probably have a nuclear weapons program.

Other examples are: human cloning, GMOs or food modification (depends on the country; some definitely have restricted this on their food supply), certain medical procedures like lobotomies.

I don’t quite understand your last sentence there, but if I understand you correctly, it would seem to me like Ukraine or Libya are pretty obvious examples of countries that faced nuclear restrictions and could not reproduce their benefits through other means.

replies(2): >>44574028 #>>44574029 #
1. stale2002 ◴[] No.44574028{6}[source]
I can't make a nuclear or chemical weapon on my gaming graphics card from 5 years ago.

The same is not true about LLMs.

No, LLMs aren't going to be stopped when anyone with a computer from the last couple of years is able to run them on their desktop. (There are smaller LLMs that can even be run on your mobile phone!)

The laws required to stop this would be draconian. It would require full government monitoring of all computers. And any country or group that "defects" by allowing people to use LLMs would gain a massive benefit.

replies(2): >>44574215 #>>44576167 #
2. TeMPOraL ◴[] No.44574215[source]
Yup. The governments of the world could shut down all LLM providers tomorrow, and it wouldn't change a thing - LLMs fundamentally are programs, not a service. There are models lagging 6-12 months behind current SOTA that you can just download and run on your own GPU today; most research is in the open too, so nothing stops people from continuing it and training new models locally.

At this point, AI research is not possible to stop without killing humanity as a technological civilization - and it's not even possible to slow it down much, short of taking the extreme measures Eliezer Yudkowsky was talking about years ago: yes, it would literally take a multinational treaty on stopping advanced compute, aggressively enforced - including (but not limited to) by preemptively bombing rogue data centers as they pop up around the world.
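The "run it on your own GPU" claim can be sanity-checked with rough arithmetic: a quantized model's weights are just parameter count times bits per weight. A minimal sketch, where the model sizes and the ~10% overhead factor for KV cache and buffers are illustrative assumptions, not measurements:

```python
# Back-of-envelope VRAM estimate for holding a quantized model's weights.
# Overhead fraction for KV cache / runtime buffers is an assumed figure.

def weight_vram_gb(n_params_billion: float, bits_per_weight: int,
                   overhead: float = 0.10) -> float:
    """Approximate VRAM (GiB) for model weights plus a flat overhead fraction."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total * (1 + overhead) / 2**30

# A 7B-parameter model at 4-bit quantization fits on an 8 GB consumer card:
print(round(weight_vram_gb(7, 4), 1))   # ~3.6 GiB
# A 70B model at 4 bits needs workstation-class VRAM:
print(round(weight_vram_gb(70, 4), 1))  # ~35.9 GiB
```

Which is why "anyone with a recent computer" is not hyperbole: 4-bit 7B-class models sit comfortably inside mainstream gaming GPUs.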

3. ben_w ◴[] No.44576167[source]
> I can't make a nuclear or chemical weapon on my gaming graphics card from 5 years ago.

You may be surprised to learn that you can make a chemical weapon on your gaming graphics card from 5 years ago.

It's just that it will void the warranty well before you have a meaningful quantity of chlorine gas from the salt water you dunked it in while switched on.
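For the curious, Faraday's law of electrolysis puts a rough number on the joke. A sketch under loose assumptions (the 20 A figure is roughly a gaming card's 12 V rail draw at ~240 W, and it pretends every electron drives the chlorine half-reaction, which a shorted, dying GPU certainly won't):

```python
# Faraday's-law estimate of chlorine from electrolyzing salt water,
# assuming 100% current efficiency for 2Cl- -> Cl2 + 2e- (wildly optimistic).

F = 96485.0    # Faraday constant, C/mol
M_CL2 = 70.9   # molar mass of Cl2, g/mol

def chlorine_grams(current_a: float, seconds: float) -> float:
    """Mass of Cl2 (grams) produced by a given charge at perfect efficiency."""
    moles_cl2 = current_a * seconds / (2 * F)
    return moles_cl2 * M_CL2

# One hour at an assumed 20 A:
print(round(chlorine_grams(20, 3600), 1))  # ~26.5 g
```

Tens of grams per hour at best, in other words - and the warranty, as noted, is voided considerably sooner.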