
255 points by tbruckner | 1 comment
adam_arthur ◴[] No.37420461[source]
Even a linear growth rate of average RAM capacity would obviate the need to run current SOTA LLMs remotely in short order.

Historically average RAM has grown far faster than linear, and there really hasn't been anything pressing manufacturers to push the envelope here in the past few years... until now.

It could be that LLM model sizes keep increasing such that we continue to require cloud consumption, but I suspect the sizes will not increase as quickly as hardware for inference.

Given how useful GPT-4 is already, maybe one more iteration would unlock the vast majority of practical use cases.

I think people will be surprised that consumers ultimately end up benefiting far more from LLMs than the providers. There's not going to be much moat or differentiation to defend margins... more of a race to the bottom on pricing.
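
A rough back-of-envelope sketch of that argument in Python (the starting RAM, doubling period, parameter count, and quantization level are all illustrative assumptions, not figures from the thread):

    # Back-of-envelope sketch: compare projected average consumer RAM against
    # the memory needed to hold a GPT-4-class model's weights at 4-bit
    # quantization. All numbers are illustrative assumptions.

    def ram_after(years, start_gb=16, doubling_period_years=4):
        # Assumed: average consumer RAM doubles every `doubling_period_years`.
        return start_gb * 2 ** (years / doubling_period_years)

    # Assumed footprint: ~1.8T parameters (a commonly repeated, unconfirmed
    # GPT-4 estimate) at 4 bits/parameter -> roughly 900 GB of weights.
    model_gb = 1.8e12 * 0.5 / 1e9

    for year in range(0, 25, 4):
        ram = ram_after(year)
        verdict = "fits" if ram >= model_gb else "does not fit"
        print(f"year +{year:2d}: avg RAM ~{ram:6.0f} GB -> {verdict} ~{model_gb:.0f} GB of weights")

Under those assumptions the weights alone don't fit in average consumer RAM for roughly two decades; faster RAM growth or smaller, more efficient models shorten that considerably.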

replies(8): >>37420537 #>>37420948 #>>37421196 #>>37421214 #>>37421497 #>>37421862 #>>37421945 #>>37424918 #
ls612 ◴[] No.37420948[source]
For me the test is: when will a Siri-LLM be able to run locally on my iPhone at GPT-4 level or better? 2030? Farther out? Never, because of governments forbidding it? To what extent will improvements be driven by the last gasps of Moore’s Law vs by improving model architectures to be more efficient?
replies(3): >>37420983 #>>37421670 #>>37422133 #
adam_arthur ◴[] No.37420983[source]
Given that phones are a few years behind PCs on RAM, likely whenever the average PC can do it, plus a few years. There are phones out there with 24GB of RAM already, it looks like.

Of course battery life would be a concern there, so I think LLM usage on phones will remain in the cloud.

Haven't studied phone RAM capacity growth rates in detail though.
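
For scale, a quick sketch of how many parameters fit in the 24 GB mentioned above at common weight precisions (KV cache, activations, and OS overhead ignored; purely illustrative):

    # Sketch: parameters that fit in a fixed RAM budget at common weight
    # precisions, ignoring KV cache, activations, and OS overhead.

    def params_that_fit(ram_gb, bits_per_param):
        bytes_available = ram_gb * 1e9
        return bytes_available / (bits_per_param / 8)

    for bits in (16, 8, 4):
        n = params_that_fit(24, bits)  # the 24 GB phone mentioned above
        print(f"{bits:2d}-bit weights: ~{n / 1e9:.0f}B parameters in 24 GB")

So a 24 GB phone could in principle hold a ~45-50B-parameter model at 4-bit, which is still well short of GPT-4-scale estimates.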

replies(2): >>37421363 #>>37425019 #
baq ◴[] No.37425019[source]
Wonder if someone is thinking of LLM-specific RAM, slower but much denser. Bonus points for not having to reload the model after power cycling.

Maybe call this fantastic technology something idiotic like 3D XPoint?

replies(2): >>37425474 #>>37426620 #
ronsor ◴[] No.37425474[source]
The problem with that is that LLM inference speed is mostly bottlenecked by memory bandwidth. Slower RAM means worse performance.
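
A quick sketch of why bandwidth rather than capacity sets the ceiling during decoding: each generated token has to stream essentially all the weights through the memory bus, so tokens/sec is bounded by bandwidth divided by model size. The hardware figures below are rough, assumed ballpark numbers, not measurements.

    # Sketch of the bandwidth bound: during autoregressive decoding every
    # weight is read roughly once per generated token, so an upper bound on
    # throughput is memory_bandwidth / model_size_in_bytes.
    # Bandwidth figures below are rough, assumed ballpark numbers.

    def max_tokens_per_sec(bandwidth_gb_s, model_gb):
        return bandwidth_gb_s / model_gb

    model_gb = 70e9 * 0.5 / 1e9  # hypothetical 70B model at 4-bit -> ~35 GB

    for name, bw in [("phone LPDDR5, ~50 GB/s", 50),
                     ("desktop DDR5, ~80 GB/s", 80),
                     ("high-end unified memory, ~800 GB/s", 800),
                     ("denser-but-slower memory, ~10 GB/s", 10)]:
        print(f"{name}: ~{max_tokens_per_sec(bw, model_gb):.1f} tokens/sec ceiling")

Which is why denser-but-slower, non-volatile memory would mostly help with capacity and not having to reload the model, not with generation speed.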