
Anthropic raises $13B Series F

(www.anthropic.com)
585 points by meetpateltech | 4 comments
llamasushi No.45105325
The compute moat is getting absolutely insane. We're basically at the point where you need a small country's GDP just to stay in the game for one more generation of models.

What gets me is that this isn't even a software moat anymore - it's literally just whoever can get their hands on enough GPUs and power infrastructure. TSMC and the power companies are the real kingmakers here. You can have all the talent in the world but if you can't get 100k H100s and a dedicated power plant, you're out.

Wonder how much of this $13B is just prepaying for compute vs actual opex. If it's mostly compute, we're watching something weird happen - like the privatization of Manhattan Project-scale infrastructure. Except instead of enriching uranium we're computing gradient descents lol

The wildest part is we might look back at this as cheap. GPT-4 training was what, $100M? GPT-5/Opus-4 class probably $1B+? At this rate GPT-7 will need its own sovereign wealth fund

1. maqp No.45106070
>You can have all the talent in the world but if you can't get 100k H100s and a dedicated power plant, you're out.

I really have to wonder how long it will be before the competition shifts to who has the most wafer-scale engines. Surely a GPU card is a less efficient packaging form factor than one large die with on-board HBM under a single massive cooling block?

2. mfro No.45106202
The sentiment I've heard is that manufacturers don't want to increase die size, because the expected number of defects per die grows with area, which tanks yield.
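The yield argument above can be sketched with the simplest standard model, the Poisson yield model, where the chance a die has zero fatal defects is exp(-defect density × die area). The defect density below is an illustrative assumption, not a foundry figure, and the die areas are rough public ballpark sizes; Cerebras in practice routes around defective cores with built-in redundancy rather than demanding a perfect wafer.

```python
import math

def die_yield(die_area_cm2: float, defect_density_per_cm2: float) -> float:
    """Poisson yield model: probability a die has zero fatal defects."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

D0 = 0.1  # assumed defect density in defects/cm^2 (illustrative only)

gpu_die = 8.14       # ~cm^2, roughly an H100-class reticle-limited die (814 mm^2)
wafer_scale = 462.0  # ~cm^2, roughly a Cerebras WSE-class die

print(f"GPU-sized die yield:   {die_yield(gpu_die, D0):.1%}")
print(f"Wafer-scale die yield: {die_yield(wafer_scale, D0):.3e}")
```

Under this toy model a reticle-sized die still yields a reasonable fraction while a naive wafer-scale die yields essentially never, which is exactly the pressure that pushes designs toward many small chiplets (or toward heavy on-die redundancy).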
3. Workaccount2 No.45106540
Meanwhile at Cerebras...heh

But I do believe their cost per unit of compute is still far higher than that of discrete chips.

4. 15155 No.45112048
This is why chiplets are used.