
Anthropic raises $13B Series F

(www.anthropic.com)
585 points | meetpateltech
llamasushi ◴[] No.45105325[source]
The compute moat is getting absolutely insane. We're basically at the point where you need a small country's GDP just to stay in the game for one more generation of models.

What gets me is that this isn't even a software moat anymore - it's literally just whoever can get their hands on enough GPUs and power infrastructure. TSMC and the power companies are the real kingmakers here. You can have all the talent in the world but if you can't get 100k H100s and a dedicated power plant, you're out.
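
Rough numbers on the power plant bit (700 W is the published H100 SXM TDP; the 1.5x factor for host CPUs, networking and cooling is an assumption):

  NUM_GPUS = 100_000
  GPU_TDP_WATTS = 700        # published H100 SXM board power
  OVERHEAD = 1.5             # assumed host + network + cooling multiplier

  gpu_mw = NUM_GPUS * GPU_TDP_WATTS / 1e6
  print(f"GPUs alone:  {gpu_mw:.0f} MW")             # 70 MW
  print(f"Whole site: ~{gpu_mw * OVERHEAD:.0f} MW")  # ~105 MW

~100 MW of continuous draw is genuinely dedicated-power-plant territory.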

Wonder how much of this $13B is just prepaying for compute vs actual opex. If it's mostly compute, we're watching something weird happen - like the privatization of Manhattan Project-scale infrastructure. Except instead of enriching uranium we're computing gradient descents lol
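
(Those gradient descents, in toy linear-regression form — a sketch of the basic loop, nothing like how frontier training is actually structured:)

  import numpy as np

  rng = np.random.default_rng(0)
  X = rng.normal(size=(256, 8))            # toy features
  true_w = rng.normal(size=8)
  y = X @ true_w + 0.1 * rng.normal(size=256)

  w = np.zeros(8)                          # start from nothing
  for step in range(500):
      grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the MSE loss
      w -= 0.1 * grad                         # the "descent" part
  print(np.abs(w - true_w).max())          # ~0.01: weights recovered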

The wildest part is we might look back at this as cheap. GPT-4 training was what, $100M? GPT-5/Opus-4 class probably $1B+? At this rate GPT-7 will need its own sovereign wealth fund
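
Back-of-envelope on those numbers — every input below is an assumed round number for illustration, not a disclosed figure:

  # Training cost ~= FLOPs / (per-GPU throughput * utilization) * $/GPU-hour
  params = 1e12          # assumed parameter count for a GPT-4-class model
  tokens = 1e13          # assumed training tokens
  flops = 6 * params * tokens        # standard ~6ND estimate for dense training

  peak = 1e15            # ~1 PFLOP/s per GPU at low precision (assumed)
  mfu = 0.4              # assumed model FLOPs utilization
  dollars_per_gpu_hour = 2.0         # assumed bulk rate

  gpu_hours = flops / (peak * mfu) / 3600
  print(f"~${gpu_hours * dollars_per_gpu_hour / 1e6:.0f}M")   # ~$83M

That lands in the $100M ballpark, and shows why the bill is basically params x tokens and nothing else matters much.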

duxup ◴[] No.45105396[source]
It's not clear to me that each new generation of models is going to be "that" much better relative to its cost.

Anecdotally, moving from model to model, I'm not seeing huge changes in many use cases. Often I can just pick an older model and not tell the difference...

Video seems to be moving forward fast from what I can tell, but it sounds like the back-end compute cost is skyrocketing along with it, which raises other questions.

ACCount37 ◴[] No.45105777[source]
The raw model scale is not increasing by much lately. AI companies are constrained by what fits in this generation of hardware, and are waiting for the next generation to become available. Models much larger than the current frontier are still too expensive to train, and far too expensive to serve en masse.

In the meantime, "better data", "better training methods" and "more training compute" are the main ways you can squeeze out more performance juice without increasing the scale. And there are obvious gains to be had there.
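
The Chinchilla-style loss fit makes this concrete: at a fixed parameter count, pouring in more data (i.e. more training compute) still buys you loss. The constants below are the published Hoffmann et al. (2022) fit; assuming it still describes today's frontier models is the leap here:

  # L(N, D) = E + A/N^a + B/D^b, Hoffmann et al. (2022) fitted constants
  E, A, B, a, b = 1.69, 406.4, 410.7, 0.34, 0.28

  def loss(n_params, n_tokens):
      return E + A / n_params**a + B / n_tokens**b

  N = 70e9                          # hold model size fixed at 70B params
  for D in (1.4e12, 5e12, 15e12):   # scale up training tokens instead
      print(f"D={D:.0e}: L={loss(N, D):.3f}")
  # ~1.94 -> ~1.89 -> ~1.86: loss keeps dropping with data alone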

xnx ◴[] No.45105841[source]
> AI companies are constrained by what fits in this generation of hardware, and waiting for the next generation to become available.

Does this apply to Google, which uses custom-built TPUs while everyone else uses stock Nvidia?

ACCount37 ◴[] No.45106122[source]
By all accounts, what's in Google's racks right now (TPU v5e, v6e) is vaguely H100-adjacent, in both raw performance and supported model size.

If Google wants anything better than that? They, too, have to wait for the new hardware to arrive. Chips have a lead time - they may be your own designs, but you can't just wish them into existence.

xxpor ◴[] No.45106457[source]
Aren't chips + memory constrained by process + reticle size, and therefore by how much HBM you can stuff around the compute die? Because of that, I'd expect everyone to support more or less the same model size at the same time, barring a fundamentally different architecture.
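
The arithmetic backs this up. 80 GB is the H100's HBM capacity; the bf16 weights and the slice of memory reserved for KV cache/activations are assumptions:

  hbm_bytes = 80e9           # H100 HBM capacity
  bytes_per_param = 2        # bf16/fp16 weights
  kv_frac = 0.3              # assumed HBM fraction held back for KV cache etc.

  per_chip = hbm_bytes * (1 - kv_frac) / bytes_per_param
  print(f"~{per_chip / 1e9:.0f}B params per chip")       # ~28B
  print(f"~{8 * per_chip / 1e9:.0f}B params per node")   # ~224B across 8 chips

So as long as everyone's HBM-per-package is in the same range, everyone's serviceable model size is too — which is the parent's point about H100-adjacency.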