
LLM Inevitabilism

(tomrenner.com)
1612 points | SwoopsFromAbove
keiferski ◴[] No.44568304[source]
One of the negative consequences of the “modern secular age” is that many very intelligent, thoughtful people feel justified in brushing away millennia of philosophical and religious thought because they deem it outdated or no longer relevant. (The book A Secular Age is a great read on this, btw, I think I’ve recommended it here on HN at least half a dozen times.)

And so a result of this is that they fail to notice the same recurring psychological patterns that underlie thoughts about how the world is and how it will be in the future, and so they never adjust their positions in light of that awareness.

For example - this AI inevitabilism stuff is not dissimilar to many ideas originally from the Reformation, like predestination. The notion that history is just on some inevitable pre-planned path is not a new idea, except now the actor has changed from God to technology. On a psychological level it’s the same thing: an offloading of freedom and responsibility to a powerful, vaguely defined force that may or may not exist outside the collective minds of human society.

replies(15): >>44568532 #>>44568602 #>>44568862 #>>44568899 #>>44569025 #>>44569218 #>>44569429 #>>44571000 #>>44571224 #>>44571418 #>>44572498 #>>44573222 #>>44573302 #>>44578191 #>>44578192 #
theSherwood ◴[] No.44569429[source]
I think this is a case of bad pattern matching, to be frank. Two cosmetically similar things don't necessarily have a shared cause. When you see billions in investment to make something happen (AI) because of obvious incentives, it's very reasonable to see that as something that's likely to happen; something you might be foolish to bet against. This is qualitatively different from the kind of predestination present in many religions where adherents have assurance of the predestined outcome often despite human efforts and incentives. A belief in a predestined outcome is very different from extrapolating current trends into the future.
replies(1): >>44569917 #
martindbp ◴[] No.44569917[source]
Yes, nobody is claiming it's inevitable based on nothing; it's based on first-principles thinking: economics, incentives, game theory, human psychology. Trying to recast this in terms of "predestination" gives me strong wordcel vibes.
replies(2): >>44570102 #>>44570198 #
bonoboTP ◴[] No.44570102[source]
It's a bit like pattern matching the Cold War fears of a nuclear exchange and nuclear winter to the flood myths or apocalyptic narratives across the ages, and hence dismissing it as "ah, seen this kind of talk before", totally ignoring that Hiroshima and Nagasaki actually happened, later tests actually happened, etc.

It's indeed a symptom of working in an environment where everything is just discourse about discourse, and prestige is given to some surprising novel packaging or merger of narratives, and all that is produced is words that argue with other words, and it's all about criticizing how one author undermines some other author too much or not enough and so on.

From that point of view, sure, nothing new under the sun.

It's all well and good to complain about the boy who cried wolf, but when you see the pack of wolves entering the village, it's no longer just about words.

Now, anyone is of course free to dispute the empirical arguments, but I see many very self-satisfied prestigious thinkers who think they don't have to stoop so low as to actually look at the models and how people use them in reality; it can all just be dismissed based on ick factors and name-calling like "slop".

Few are saying that these things are eschatological inevitabilities. They are saying that there are incentive gradients pointing in a certain direction, and that the world cannot be moved out of that groove without massive and fragile coordination, for game-theoretic reasons, given the material state of the world right now out there, outside the page of the "text".

replies(1): >>44570142 #
keiferski ◴[] No.44570142[source]
I think you’re missing the point of the blog post and the point of my grandparent comment, which is that there is a pervasive attitude amongst technologists that “it’s just gonna happen anyway and therefore whether I work on something negative for the world or not makes no difference, and therefore I have no role as an ethical agent.” It’s a way to avoid responsibility and freedom.

We are not discussing the likelihood of some particular scenario based on models and numbers and statistics and predictions by Very Smart Important People.

replies(2): >>44570218 #>>44570341 #
bonoboTP ◴[] No.44570218[source]
I'm not sure how common that is... I'd guess most who work on it also think there's a positive future with LLMs. They likely wouldn't say "I work on something negative for the world".
replies(1): >>44570246 #
keiferski ◴[] No.44570246[source]
I think the vast majority of people are there because it’s interesting work and they’re being paid exceptionally well. That’s the extent to which 95/100 of employees engage with the ethics of their work.