
LLM Inevitabilism

(tomrenner.com)
1612 points by SwoopsFromAbove | 1 comment
1. bemmu No.44570035
I was going to make an argument that it's inevitable, because at some point compute will get so cheap that someone could just train one at home, and since the knowledge of how to do it is out there, people will do it.

But seeing that a company like Meta is using >100k GPUs to train these models, even at 25% yearly improvement in GPU price-performance it would still take until around 2060 before someone could buy 50 GPUs and have the equivalent power to train one privately. So I suppose if society decided to outlaw LLM training, or a market crash deterred companies from continuing it, it might be possible to put the genie back in the bottle for a few decades.
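As a sanity check on that arithmetic, here's a minimal sketch in Python, using the numbers from above (a 100k-GPU frontier run, a 50-GPU home budget, and the assumed 25% yearly improvement rate):

  import math

  # Assumptions taken from the comment above:
  # a frontier training run uses ~100,000 GPUs today,
  # a private buyer can afford ~50 GPUs,
  # and effective per-GPU training power improves 25% per year.
  current_gpus = 100_000
  home_gpus = 50
  yearly_improvement = 1.25

  # Find n such that: home_gpus * yearly_improvement**n >= current_gpus
  gap = current_gpus / home_gpus                      # 2000x compute gap
  years = math.log(gap) / math.log(yearly_improvement)
  print(f"{years:.0f} years")                         # ~34 years -> roughly 2060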

I wouldn't be surprised, however, if there are still 10x algorithmic improvements to be found too...