
255 points tbruckner | 8 comments
adam_arthur ◴[] No.37420461[source]
Even a linear growth rate in average RAM capacity would, in short order, obviate the need to run current SOTA LLMs remotely.

Historically average RAM has grown far faster than linear, and there really hasn't been anything pressing manufacturers to push the envelope here in the past few years... until now.

It could be that LLM model sizes keep increasing such that we continue to require cloud consumption, but I suspect the sizes will not increase as quickly as hardware for inference.
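
For a rough sense of scale, here is a minimal back-of-the-envelope sketch of that arithmetic in Python. The parameter counts, quantization levels, 20% runtime overhead, and the 16 GiB baseline growing a flat 8 GiB per year are all illustrative assumptions, not measurements:

    # Back-of-the-envelope estimate (illustrative, assumed numbers): RAM needed
    # to hold an LLM's weights at different quantization levels, plus a rough
    # check of how quickly purely linear consumer-RAM growth would catch up.

    def model_ram_gib(params_billion: float, bits_per_weight: float, overhead: float = 1.2) -> float:
        """Weights plus ~20% headroom for activations, KV cache, and runtime buffers."""
        bytes_per_weight = bits_per_weight / 8
        return params_billion * 1e9 * bytes_per_weight * overhead / 2**30

    for params in (7, 13, 70):
        for bits in (16, 8, 4):
            print(f"{params:>3}B params @ {bits:>2}-bit: ~{model_ram_gib(params, bits):6.1f} GiB")

    # Assume a 16 GiB consumer baseline growing by a flat 8 GiB/year (linear, no Moore's-law help).
    baseline_gib, growth_per_year = 16, 8
    target_gib = model_ram_gib(70, 4)   # ~39 GiB for a 4-bit 70B-parameter model
    years = 0
    while baseline_gib + growth_per_year * years < target_gib:
        years += 1
    print(f"~{years} years of linear growth to fit a 4-bit 70B model (~{target_gib:.0f} GiB)")

Under those assumptions, a 4-bit 70B-parameter model needs roughly 40 GiB, a figure that even flat yearly increases in consumer RAM reach within a few years.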

Given how useful GPT-4 is already, maybe one more iteration would unlock the vast majority of practical use cases.

I think people will be surprised that consumers ultimately end up benefitting far more from LLMs than the providers do. There's not going to be much moat or differentiation to defend margins... more of a race to the bottom on pricing.

replies(8): >>37420537 #>>37420948 #>>37421196 #>>37421214 #>>37421497 #>>37421862 #>>37421945 #>>37424918 #
MuffinFlavored ◴[] No.37421214[source]
> Given how useful GPT-4 is already, maybe one more iteration would unlock the vast majority of practical use cases.

Unless I'm misunderstanding, doesn't OpenAI have a vested interest in keeping their products so good/so complex/so large that consumer hobbyists can't just `git clone` an alternative that's 95% as good and run it locally?
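
(For concreteness, a minimal sketch of that "clone and run it locally" workflow, using the Hugging Face transformers library. The model ID is only an example placeholder, and the snippet assumes the transformers, accelerate, and bitsandbytes packages plus enough RAM/VRAM.)

    # Minimal sketch: pulling and running an open-weights chat model locally.
    # The model ID is an example placeholder; substitute any open model you
    # have license access to and enough memory for.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-2-7b-chat-hf"  # example; gated, requires accepting Meta's license

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # place layers on GPU/CPU as available (needs accelerate)
        load_in_4bit=True,   # 4-bit quantization via bitsandbytes to fit consumer hardware
    )

    prompt = "Summarize why open-weights models erode an API provider's moat."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The device_map="auto" and 4-bit loading are what make this feasible on consumer hardware at all; at full 16-bit precision the same model would need several times the memory.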

replies(3): >>37421454 #>>37421498 #>>37421783 #
1. reckless ◴[] No.37421783[source]
Indeed they do; however, companies like Meta (altruistically or not) are preventing OpenAI from building 'moats' by releasing models and architecture details in a very public way.
replies(2): >>37422263 #>>37422288 #
2. runjake ◴[] No.37422263[source]
I think it's a safe bet to say it's not altruistic. And, if Meta were to wrestle away OpenAI's moat, they'd eagerly create their own, given the opportunity.
replies(3): >>37422875 #>>37423084 #>>37426473 #
3. foobiekr ◴[] No.37422288[source]
'Commoditize your complement' strategies can just as easily put a market into a zombie state in the long run.
4. passion__desire ◴[] No.37422875[source]
Meta doesn't interact with its users through models in the obvious ways that MS and Google do; all its model magic happens behind the scenes. Meta can keep releasing second-best models to undercut the others and keep them from getting too far ahead, and the open-source community will take it from there. Dall-E is dead.
replies(2): >>37423063 #>>37427098 #
5. bugglebeetle ◴[] No.37423063{3}[source]
And if the open-source community extends their models, they can accrue those benefits back to themselves. This is already how they've become such a huge player in machine learning (by open-sourcing amazing stuff).
6. sangnoir ◴[] No.37423084[source]
> And, if Meta were to wrestle away OpenAI's moat, they'd eagerly create their own

Meta is already capable of monetizing content generated by the models: these models complement their business and they could not care less which model you're using to earn them advertising dollars, as long as you keep the (preferably high quality) content coming.

7. AnthonyMouse ◴[] No.37426473[source]
> And, if Meta were to wrestle away OpenAI's moat, they'd eagerly create their own, given the opportunity.

At which point the new underdogs would have an interest in doing to them what they're doing to OpenAI.

That assumes LLM progress continues at a rapid pace for an extended period of time, though. It's not implausible that they'll reach a level past which non-trivial progress is hard, and if there is an open-source model at that level, there isn't going to be a moat.

8. astrange ◴[] No.37427098{3}[source]
I think Dall-E isn't actually dead, but was merely renamed Bing Image Creator.