LLM Inevitabilism

(tomrenner.com)
1616 points SwoopsFromAbove | 1 comment
bloppe ◴[] No.44573764[source]
This inevitabilist framing rests on an often unspoken assumption: that LLMs will decisively outperform human capabilities across myriad domains. If that assumption holds, then the inevitabilist quotes featured in the article are convincing to me. If LLMs turn out to be less worthwhile at scale than many people assume, the inevitabilist interpretation is just another dream of AI summer.

Burying the core assumption and focusing on its implication is indeed a fantastic way of framing the argument to win some sort of debate.

replies(2): >>44574231 #>>44574334 #
xandrius ◴[] No.44574231[source]
LLMs have already been absolutely worthwhile in many of my projects, so I guess it's already inevitable for me.
replies(2): >>44574412 #>>44577028 #
bloppe ◴[] No.44577028{3}[source]
I agree that I get a lot of value out of LLMs. But I also have to clean up after them a lot of the time. They're a far cry from being able to replace a skilled developer working on something non-trivial.