LLM Inevitabilism

(tomrenner.com)
1613 points SwoopsFromAbove | 12 comments
keiferski ◴[] No.44568304[source]
One of the negative consequences of the “modern secular age” is that many very intelligent, thoughtful people feel justified in brushing away millennia of philosophical and religious thought because they deem it outdated or no longer relevant. (The book A Secular Age is a great read on this, btw, I think I’ve recommended it here on HN at least half a dozen times.)

And so a result of this is that they fail to notice the same recurring psychological patterns that underlie thoughts about how the world is and how it will be in the future - and consequently fail to adjust their positions in light of that awareness.

For example - this AI inevitabilism stuff is not dissimilar to many ideas originally from the Reformation, like predestination. The notion that history is just on some inevitable pre-planned path is not a new idea, except now the actor has changed from God to technology. On a psychological level it’s the same thing: an offloading of freedom and responsibility to a powerful, vaguely defined force that may or may not exist outside the collective minds of human society.

replies(15): >>44568532 #>>44568602 #>>44568862 #>>44568899 #>>44569025 #>>44569218 #>>44569429 #>>44571000 #>>44571224 #>>44571418 #>>44572498 #>>44573222 #>>44573302 #>>44578191 #>>44578192 #
1. theSherwood ◴[] No.44569429[source]
I think this is a case of bad pattern matching, to be frank. Two cosmetically similar things don't necessarily have a shared cause. When you see billions in investment to make something happen (AI) because of obvious incentives, it's very reasonable to see that as something that's likely to happen; something you might be foolish to bet against. This is qualitatively different from the kind of predestination present in many religions where adherents have assurance of the predestined outcome often despite human efforts and incentives. A belief in a predestined outcome is very different from extrapolating current trends into the future.
replies(1): >>44569917 #
2. martindbp ◴[] No.44569917[source]
Yes, nobody is claiming it's inevitable based on nothing; it's based on first-principles thinking: economics, incentives, game theory, human psychology. Trying to recast this in terms of "predestination" gives me strong wordcel vibes.
replies(2): >>44570102 #>>44570198 #
3. bonoboTP ◴[] No.44570102[source]
It's a bit like pattern matching the Cold War fears of a nuclear exchange and nuclear winter to the flood myths or apocalyptic narratives across the ages, and hence dismissing it as "ah, seen this kind of talk before", totally ignoring that Hiroshima and Nagasaki actually happened, later tests actually happened, etc.

It's indeed a symptom of working in an environment where everything is just discourse about discourse, where prestige goes to some surprising novel packaging or merger of narratives, where all that is produced is words arguing with other words, and where it's all about criticizing how one author undermines some other author too much or not enough, and so on.

From that point of view, sure, nothing new under the sun.

It's all well and good to complain about the boy who cried wolf, but when you see the pack of wolves entering the village, it's no longer just about words.

Now, anyone is of course free to dispute the empirical arguments, but I see many very self-satisfied, prestigious thinkers who think they don't have to stoop so low as to actually look at the models and how people use them in reality; for them it can all be dismissed based on ick factors and name-calling like "slop".

Few are saying that these things are eschatological inevitabilities. They are saying that there are incentive gradients pointing in a certain direction, and that the trajectory cannot be moved out of that groove without massive and fragile coordination, for game-theoretic reasons, given the material state of the world right now out there, outside the page of the "text".

replies(1): >>44570142 #
4. keiferski ◴[] No.44570142{3}[source]
I think you’re missing the point of the blog post and the point of my grandparent comment, which is that there is a pervasive attitude amongst technologists that “it’s just gonna happen anyway and therefore whether I work on something negative for the world or not makes no difference, and therefore I have no role as an ethical agent.” It’s a way to avoid responsibility and freedom.

We are not discussing the likelihood of some particular scenario based on models and numbers and statistics and predictions by Very Smart Important People.

replies(2): >>44570218 #>>44570341 #
5. welferkj ◴[] No.44570198[source]
Nobody serious is claiming theological predestination is based on "nothing", either. Talk about poor pattern matching.
replies(1): >>44570477 #
6. bonoboTP ◴[] No.44570218{4}[source]
I'm not sure how common that is... I'd guess most who work on it also think that there's a positive future with LLMs. I mean, they likely wouldn't say "I work on something negative for the world".
replies(1): >>44570246 #
7. keiferski ◴[] No.44570246{5}[source]
I think the vast majority of people are there because it’s interesting work and they’re being paid exceptionally well. That’s the extent to which 95/100 of employees engage with the ethics of their work.
8. theSherwood ◴[] No.44570341{4}[source]
I agree that "very likely" is not "inevitable". It is possible for the advance of AI to stop, but difficult. I agree that doesn't absolve people of responsibility for what they do. But I disagree with the comparison to religious predestination.
9. theSherwood ◴[] No.44570477{3}[source]
You are, of course, entitled to your religious convictions. But to most people outside of your religious community, the evidence for some specific theological claim (such as predestination) looks an awful lot like "nothing". In contrast, claims about the trajectory of AI (whether you agree with them or not) are based on easily verifiable, public knowledge about the recent history of AI development.
replies(1): >>44570803 #
10. welferkj ◴[] No.44570803{4}[source]
It is not a "specific theological claim" either, rather a school of theological discourse. You're literally doing free-form association now and pretending to have novel insights into centuries of work on the issue.
replies(1): >>44570960 #
11. theSherwood ◴[] No.44570960{5}[source]
I'm not pretending to any novel insights. Most of us who don't have much use for theology are generally unimpressed by its discourse. Not novel at all. And the fact that those "centuries of work" have produced no concrete developments existing outside the minds of those invested in the discourse is one reason why many of us remain unimpressed. In contrast, AI development is resulting in concrete changes that are easily verified by anyone, and on much shorter time scales.
replies(1): >>44571291 #
12. bonoboTP ◴[] No.44571291{6}[source]
Relatedly, it would be bordering on impossible to convince Iran of the validity of Augustine, Aquinas, or Calvin, but it was fairly easy with nuclear physics. Theology isn't "based on nothing", but the convincing power of a quantum physics textbook happens to be radically different from that of the Summa Theologiae, even though both are just books written by educated people on the basis of a lot of thought and prior work.