507 points martinald | 2 comments
sc68cal No.45053212
This whole article is built on using DeepSeek R1 as a baseline, a premise I don't think is correct. DeepSeek is much more efficient, and I don't think it's a valid way to estimate what OpenAI's and Anthropic's costs are.

https://www.wheresyoured.at/deep-impact/

Basically, DeepSeek is _very_ efficient at inference, and that was the whole reason why it shook the industry when it was released.

dcre No.45054034
What are we meant to take away from the 8000 word Zitron post?

In any case, here is what Anthropic CEO Dario Amodei said about DeepSeek:

"DeepSeek produced a model close to the performance of US models 7-10 months older, for a good deal less cost (but not anywhere near the ratios people have suggested)"

"DeepSeek-V3 is not a unique breakthrough or something that fundamentally changes the economics of LLM’s; it’s an expected point on an ongoing cost reduction curve. What’s different this time is that the company that was first to demonstrate the expected cost reductions was Chinese."

https://www.darioamodei.com/post/on-deepseek-and-export-cont...

We certainly don't have to take his word for it, but the claim is that DeepSeek's models are not much more efficient to train or serve than closed models of comparable quality. Furthermore, both Amodei and Sam Altman have recently claimed that inference is profitable:

Amodei: "If you consider each model to be a company, the model that was trained in 2023 was profitable. You paid $100 million, and then it made $200 million of revenue. There's some cost to inference with the model, but let's just assume, in this cartoonish cartoon example, that even if you add those two up, you're kind of in a good state. So, if every model was a company, the model, in this example, is actually profitable.

What's going on is that at the same time as you're reaping the benefits from one company, you're founding another company that's much more expensive and requires much more upfront R&D investment. And so the way that it's going to shake out is this will keep going up until the numbers go very large and the models can't get larger, and then it'll be a large, very profitable business, or, at some point, the models will stop getting better, right? The march to AGI will be halted for some reason, and then perhaps it'll be some overhang. So, there'll be a one-time, 'Oh man, we spent a lot of money and we didn't get anything for it.' And then the business returns to whatever scale it was at."
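Amodei's cartoon accounting can be sketched as a quick back-of-the-envelope calculation. The $100M training cost and $200M revenue are his numbers from the quote; the inference cost and the successor model's budget below are assumed placeholders to illustrate the overlap effect, not figures from the interview:

```python
# Sketch of Amodei's "each model is a company" cartoon example.
# All figures in $M. Inference cost and next-run budget are assumptions.

def model_pnl(training_cost, revenue, inference_cost):
    """Per-model profit if each model were treated as its own company."""
    return revenue - training_cost - inference_cost

# The 2023 model from the quote: $100M training, $200M revenue,
# plus some (assumed) inference cost.
profit_2023 = model_pnl(training_cost=100, revenue=200, inference_cost=50)
print(profit_2023)  # positive: the model-as-company is profitable

# But the lab's cash flow overlaps that profit with the next, pricier
# training run (assumed 10x scale-up), so the overall company posts a loss
# even while each individual model pays for itself.
next_training = 1_000
yearly_cash_flow = profit_2023 - next_training
print(yearly_cash_flow)  # negative while models keep getting larger
```

This makes the structure of the argument concrete: losses come from the ever-larger next run, not from serving the current model, which is why the loss would stop the moment models stop scaling.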

https://cheekypint.substack.com/p/a-cheeky-pint-with-anthrop...

Altman: "If we didn’t pay for training, we’d be a very profitable company."

https://www.theverge.com/command-line-newsletter/759897/sam-...

1. gmerc No.45054258
Grok 3.5: 400M training run. DeepSeek R1: 5M training run. Released around the same time, marginal performance difference.
2. dcre No.45055077
I suspect that says more about Grok than anything else.