
LLM Inevitabilism

(tomrenner.com)
1612 points | SwoopsFromAbove | source
bloppe ◴[] No.44573764[source]
This inevitabilist framing rests on an often unspoken assumption: that LLMs will decisively outperform human capabilities across myriad domains. If that assumption holds, then the inevitabilist quotes featured in the article are convincing to me. If LLMs turn out to be less worthwhile at scale than many people assume, the inevitabilist interpretation is just another dream of AI summer.

Burying the core assumption and focusing on its implication is indeed a fantastic way of framing the argument to win some sort of debate.

replies(2): >>44574231 #>>44574334 #
xandrius ◴[] No.44574231[source]
LLMs have already been absolutely worthwhile in many of my projects, so I guess it's already inevitable for me.
replies(2): >>44574412 #>>44577028 #
dmbche ◴[] No.44574412[source]
>that LLMs will decisively outperform human capabilities across myriad domains.

Do your LLMs outperform you at your tasks?

If not, and were they to become more expensive by a non-negligible margin, would you keep using them at any cost in their current state?

replies(2): >>44574975 #>>44575165 #
JyB ◴[] No.44575165[source]
It doesn’t have to be as performant or as fast. It can work and iterate alone when set up properly. Any time it saves is pure bonus. It is already inevitable.