
255 points tbruckner | 2 comments
adam_arthur No.37420461
Even a linear growth rate in average RAM capacity would, in short order, obviate the need to run current SOTA LLMs remotely.

Historically, average RAM capacity has grown far faster than linearly, and nothing has really pressed manufacturers to push the envelope here in the past few years... until now.

It could be that LLM model sizes keep increasing such that we continue to require cloud consumption, but I suspect model sizes will not grow as quickly as the hardware available for inference.

Given how useful GPT-4 already is, maybe one more iteration would unlock the vast majority of practical use cases.

I think people will be surprised that consumers ultimately end up benefiting far more from LLMs than the providers do. There's not going to be much moat or differentiation to defend margins; it's more of a race to the bottom on pricing.
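The RAM argument is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch (weights only; real inference also needs headroom for the KV cache and activations, and actual quantized file sizes vary a bit by format):

```python
def model_ram_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate RAM (GB) to hold a model's weights in memory.

    Ignores KV cache, activations, and runtime overhead, so treat the
    result as a lower bound.
    """
    return params_billion * 1e9 * bytes_per_param / 1e9

# Weights-only footprint at two common precisions:
# 2 bytes/param (fp16) vs. 0.5 bytes/param (4-bit quantization).
for name, params in [("65B", 65), ("70B", 70), ("180B", 180)]:
    print(f"{name}: ~{model_ram_gb(params, 2):.0f} GB fp16, "
          f"~{model_ram_gb(params, 0.5):.1f} GB at 4-bit")
```

By this estimate a 4-bit 70B model fits in ~35 GB, already within reach of a high-RAM consumer desktop, while 180B at 4-bit (~90 GB) still needs workstation-class memory.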

replies(8): >>37420537 #>>37420948 #>>37421196 #>>37421214 #>>37421497 #>>37421862 #>>37421945 #>>37424918 #
cs702 No.37421497
I agree: no one has any technological advantage when it comes to LLMs anymore. Some companies, like OpenAI, may have other advantages, like an ecosystem of developers. But the gobs of money that so many companies have burned to train giant proprietary models are unlikely to see any payback.

What I think will happen is that more companies will come to the realization that it's in their best interest to open their giant models. Training all those giant models is already a sunk cost. If there's no profit to be made by keeping a model proprietary, why not open it to gain (or avoid losing) mind-share, and to mess with competitors' plans?

First, it was LLaMA, with up to 65B params, opened against Meta's wishes. Then, it was LLaMA 2, with up to 70B params, opened by Meta on purpose, to mess with Google's and Microsoft/OpenAI's plans. Now, it's Falcon 180B. Like you, I'm wondering, what comes next?

replies(4): >>37421627 #>>37422256 #>>37424763 #>>37429907 #
1. lambda_garden No.37424763
> LLaMA, with up to 65B params, opened against Meta's wishes

They sure didn't try very hard to secure it. I wonder if it was their strategy all along.

replies(1): >>37426416 #
2. AnthonyMouse No.37426416
I suspect this was the goal of some of the people inside the company, but imposing nominal terms on it was the price of getting it through the bureaucracy, or perhaps was required by some agreement covering a mostly irrelevant but actually present subset of the original model.

Then the inevitable occurred, making it obvious that the restrictions were both impractical to enforce and counterproductive, so they released a new one with fewer of them.