
LLM Inevitabilism

(tomrenner.com)
1613 points | SwoopsFromAbove | 1 comment
bloppe No.44573764
This inevitabilist framing rests on an often unspoken assumption: that LLMs will decisively outperform human capabilities in myriad domains. If that assumption holds, then the inevitabilist quotes featured in the article are convincing to me. If LLMs turn out to be less worthwhile at scale than many people assume, the inevitabilist interpretation is just another dream of an AI summer.

Burying the core assumption and focusing on its implication is indeed a fantastic way of framing the argument to win some sort of debate.

replies(2): >>44574231 #>>44574334 #
1. lucianbr No.44574334
If <something> then it's inevitable, otherwise it's not? What exactly do you think "inevitable" means? If it depends on a condition, then by definition it is not inevitable.