
388 points by reaperducer | 1 comment
vmg12 | No.45772274
Here is a charitable perspective on what's happening:

- Nvidia has too much cash because of massive profits and nowhere to reinvest it internally.

- Nvidia instead invests in other companies that use its GPUs, structuring the deals so the money must be spent on Nvidia products.

- This accelerates the growth of these companies, deepens lock-in to Nvidia's platform, and gives Nvidia an equity stake in them.

- Because these companies' growth is accelerated, future revenue gets pulled forward for Nvidia, and because the investments must be spent on Nvidia GPUs, the lock-in compounds (rough numbers are sketched after this list).

- Nvidia also benefits from that growth through the equity it owns.
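
Here is a rough sketch of that round trip; every number below is made up purely for illustration (deal sizes, margins, and stakes are not public):

    # Hypothetical illustration of the circular-investment mechanism described above:
    # Nvidia invests cash in a customer, the customer must spend it on Nvidia GPUs,
    # and Nvidia keeps an equity stake. All figures are assumptions.

    investment = 1_000_000_000              # $1B invested in the customer (assumed)
    gpu_spend_share = 1.0                   # deal requires the full amount go to Nvidia hardware
    nvidia_gross_margin = 0.70              # rough gross-margin assumption
    equity_stake = 0.05                     # equity received in return (assumed)
    customer_future_value = 20_000_000_000  # what the customer might be worth later (assumed)

    # Revenue "pulled forward": the investment comes straight back as GPU sales.
    gpu_revenue = investment * gpu_spend_share
    gross_profit = gpu_revenue * nvidia_gross_margin
    cogs = gpu_revenue - gross_profit              # cost of building the GPUs shipped back
    net_cash_out = investment - gpu_revenue + cogs # cash that actually leaves Nvidia
    equity_value_if_growth = equity_stake * customer_future_value

    print(f"GPU revenue recognized now:      ${gpu_revenue:,.0f}")
    print(f"Gross profit on that revenue:    ${gross_profit:,.0f}")
    print(f"Net cash actually out the door:  ${net_cash_out:,.0f}")
    print(f"Equity stake if customer grows:  ${equity_value_if_growth:,.0f}")

Under those assumptions the $1B "investment" costs Nvidia only its cost of goods in real cash, while booking the full amount as revenue and holding the equity upside.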

This all depends on token economics being, or becoming, profitable. Everything seems to indicate that once the models are trained, serving them is extremely profitable and that training is the big money drain. If these models become massively profitable (or at least break even), I don't see how this doesn't benefit Nvidia massively.
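
For a sense of why people believe serving is profitable, here is a back-of-envelope calculation; the GPU cost, throughput, and price per token below are all assumptions, not measured figures:

    # Back-of-envelope sketch of the "inference is profitable" claim.
    # Every number is an assumption chosen for illustration.

    gpu_hour_cost = 3.00       # $/hour to run one high-end GPU, all-in (assumed)
    tokens_per_second = 2_000  # aggregate output tokens/sec from batched serving (assumed)
    price_per_million = 10.00  # $ charged per 1M output tokens (assumed)

    tokens_per_hour = tokens_per_second * 3600
    cost_per_million = gpu_hour_cost / (tokens_per_hour / 1_000_000)
    margin = 1 - cost_per_million / price_per_million

    print(f"Serving cost:  ${cost_per_million:.2f} per 1M output tokens")
    print(f"Sale price:    ${price_per_million:.2f} per 1M output tokens")
    print(f"Gross margin:  {margin:.0%} (before training, free users, and overhead)")

The conclusion is very sensitive to the throughput and utilization you assume, which is exactly what the replies below dispute.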

replies(10): >>45772315 #>>45772353 #>>45772362 #>>45772398 #>>45775007 #>>45775428 #>>45775539 #>>45777876 #>>45778024 #>>45778343 #
belter | No.45772362
> Everything seems to indicate that once the models are trained, they are extremely profitable

Some data would reinforce your case. Do you have it?

Here is my data point: "You Have No Idea How Screwed OpenAI Actually Is" - https://wlockett.medium.com/you-have-no-idea-how-screwed-ope...

replies(4): >>45772469 #>>45772482 #>>45772486 #>>45776065 #
logicprog | No.45776065
I can't read your hyperbolically titled, paywalled Medium post, so I don't know whether it has data I'm not aware of or is just rehashing the same stats about OpenAI & co. currently losing money (mostly due to training and free users). Here's a non-paywalled blog post that I personally found convincing: https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...
replies(1): >>45776876 #
ragingregard | No.45776876
The above article is not convincing at all.

There's nothing on infra costs, hardware throughput and capacity (accounting for hidden tokens), or depreciation, just blind faith that provider pricing "covers all costs and more". It uses a naive estimate of 1000 tokens per search based on simplistic queries, exactly the kind of usage you don't need or want an LLM for; LLMs excel at complex queries with long, complex output. And it doesn't account at all for chain-of-thought (hidden tokens), which providers bill as output tokens even though they never appear in the output (surprise).
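
To see how much the hidden tokens matter, here's a quick illustration; the token counts and the price per token are assumptions, not figures from the article:

    # Illustration of the hidden-token point above: reasoning models bill
    # chain-of-thought as output tokens even though the user never sees it.
    # All counts and prices are hypothetical.

    price_per_million_output = 10.00  # $/1M output tokens (assumed)

    naive_output_tokens = 1_000       # the blog post's simple-search estimate
    visible_answer_tokens = 2_000     # a longer, complex answer (assumed)
    hidden_reasoning_tokens = 8_000   # chain-of-thought billed but never shown (assumed)

    naive_cost = naive_output_tokens / 1e6 * price_per_million_output
    real_cost = (visible_answer_tokens + hidden_reasoning_tokens) / 1e6 * price_per_million_output

    print(f"Naive estimate:        ${naive_cost:.4f} per query")
    print(f"With hidden reasoning: ${real_cost:.4f} per query "
          f"({real_cost / naive_cost:.0f}x the naive figure)")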

It also completely skips the fact that the vast majority of paid LLM users are on fixed subscription pricing precisely because the pay-per-use API equivalent would be several times more expensive, and therefore not economical.
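
A rough comparison of what a heavy user's month would cost at pay-per-use API rates versus a flat plan; all usage figures and prices here are assumptions for illustration:

    # Subscription vs. API sketch: price a heavy user's assumed monthly
    # consumption at pay-per-use rates and compare to a flat plan.

    flat_subscription = 20.00         # $/month for a typical consumer plan (assumed)
    price_per_million_output = 10.00  # $/1M output tokens at API rates (assumed)
    price_per_million_input = 2.50    # $/1M input tokens at API rates (assumed)

    queries_per_day = 100
    output_tokens_per_query = 5_000   # includes hidden reasoning tokens (assumed)
    input_tokens_per_query = 3_000    # prompt plus accumulated chat context (assumed)

    monthly_output = queries_per_day * 30 * output_tokens_per_query
    monthly_input = queries_per_day * 30 * input_tokens_per_query
    api_equivalent = (monthly_output / 1e6 * price_per_million_output
                      + monthly_input / 1e6 * price_per_million_input)

    print(f"API-rate equivalent:  ${api_equivalent:,.2f}/month")
    print(f"Flat subscription:    ${flat_subscription:,.2f}/month")

Under those assumptions the flat plan user consumes many times what they pay for, which is the gap the article glosses over.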

Moving on.