
LLM Inevitabilism

(tomrenner.com)
1612 points SwoopsFromAbove | 2 comments
lsy ◴[] No.44568114[source]
I think two things can be true simultaneously:

1. LLMs are a new technology and it's hard to put the genie back in the bottle with that. It's difficult to imagine a future where they don't continue to exist in some form, with all the timesaving benefits and social issues that come with them.

2. Almost three years in, companies investing in LLMs have not yet discovered a business model that justifies the massive expenditure of training and hosting them, the majority of consumer usage is at the free tier, the industry is seeing the first signs of pulling back investments, and model capabilities are plateauing at a level where most people agree that the output is trite and unpleasant to consume.

There are many technologies that have seemed inevitable and then retreated for lack of commensurate business returns (the supersonic jetliner), and several that seemed poised to displace both old tech and labor but have settled into specific use cases (the microwave oven). Given the lack of a sufficiently profitable business model, it feels as likely as not that LLMs settle somewhere a little less remarkable, and hopefully less annoying, than today's almost universally disliked attempts to cram them everywhere.

replies(26): >>44568145 #>>44568416 #>>44568799 #>>44569151 #>>44569734 #>>44570520 #>>44570663 #>>44570711 #>>44570870 #>>44571050 #>>44571189 #>>44571513 #>>44571570 #>>44572142 #>>44572326 #>>44572360 #>>44572627 #>>44572898 #>>44573137 #>>44573370 #>>44573406 #>>44574774 #>>44575820 #>>44577486 #>>44577751 #>>44577911 #
giancarlostoro ◴[] No.44572326[source]
> 2. Almost three years in, companies investing in LLMs have not yet discovered a business model that justifies the massive expenditure of training and hosting them, the majority of consumer usage is at the free tier, the industry is seeing the first signs of pulling back investments, and model capabilities are plateauing at a level where most people agree that the output is trite and unpleasant to consume.

You hit the nail on the head as to why I get so much hatred from "AI Bros," as I call them, when I say it will not truly take off until it runs on your phone effortlessly, because nobody wants to foot a trillion-dollar cloud bill.

Give me a fully offline LLM that fits in 2GB of VRAM, and let's refine that so it can plug into external APIs and see how much farther we can take things without resorting to burning billions of dollars' worth of GPU compute. I don't care whether my answer arrives instantly; if I'm doing the research myself, I want to take my time to get the correct answer anyway.
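
For a sense of scale, here is a minimal sketch of what that could look like today, assuming the llama-cpp-python bindings and a hypothetical ~3B-parameter GGUF model at 4-bit quantization (a file of roughly 2 GB, so it fits the budget above):

    # Sketch only: fully offline inference with a small quantized model.
    # Assumes llama-cpp-python is installed; the model file name below is
    # hypothetical. Any ~3B-parameter GGUF quantized to 4 bits comes in
    # at roughly 2 GB.
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/small-3b-q4.gguf",  # hypothetical local file
        n_ctx=2048,       # modest context window keeps memory use down
        n_gpu_layers=-1,  # offload every layer to the GPU if it fits
    )

    out = llm(
        "Summarize the trade-offs of running LLMs locally.",
        max_tokens=200,
    )
    print(out["choices"][0]["text"])

The point isn't this particular library; it's that a 4-bit ~3B model already fits in about 2 GB, and the same local object can sit behind whatever tool-calling glue you like to reach external APIs.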

replies(2): >>44573467 #>>44573735 #
DSingularity ◴[] No.44573467[source]
You aren’t extrapolating enough. Nearly the entire history of computing has been one that oscillates between shared computing and personal computing. Give it time. These massive cloud bills are building the case for accelerators in phones. It’s going to happen; it just needs time.
replies(1): >>44574143 #
giancarlostoro ◴[] No.44574143[source]
That's fine, that's what I want ;) I just grow tired of people hating on me for thinking that we really need to run the models locally for them to take off.
replies(1): >>44577939 #
DSingularity ◴[] No.44577939[source]
I’m not sure why people are hating on you. If you love being free, then you should love the idea of being independent when it comes to common computing. If LLMs are to become common, we should all be rooting for open weights and efficient local execution.

It’s gonna take some time but it’s inevitable I think.