
Anthropic raises $13B Series F

(www.anthropic.com)
585 points by meetpateltech | 1 comment
llamasushi No.45105325
The compute moat is getting absolutely insane. We're basically at the point where you need a small country's GDP just to stay in the game for one more generation of models.

What gets me is that this isn't even a software moat anymore - it's literally just whoever can get their hands on enough GPUs and power infrastructure. TSMC and the power companies are the real kingmakers here. You can have all the talent in the world but if you can't get 100k H100s and a dedicated power plant, you're out.

Wonder how much of this $13B is just prepaying for compute vs actual opex. If it's mostly compute, we're watching something weird happen - like the privatization of Manhattan Project-scale infrastructure. Except instead of enriching uranium we're computing gradient descents lol

The wildest part is we might look back at this as cheap. GPT-4 training was what, $100M? GPT-5/Opus-4 class probably $1B+? At this rate GPT-7 will need its own sovereign wealth fund
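The napkin math above is easy to sketch. Note the $100M baseline and the implied ~10x-per-generation growth are the comment's own speculative figures, not known training costs:

```python
# Rough extrapolation of frontier-model training costs, assuming the
# ~10x-per-generation growth the comment implies (hypothetical figures).
def training_cost(generation: int,
                  base_cost_usd: float = 100e6,
                  growth: float = 10.0) -> float:
    """Estimated cost of generation N, pinning generation 4 at base_cost_usd."""
    return base_cost_usd * growth ** (generation - 4)

for gen in range(4, 8):
    print(f"GPT-{gen}: ~${training_cost(gen) / 1e9:.1f}B")
# GPT-7 lands at ~$100B under these assumptions, i.e. sovereign-wealth-fund territory
```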

replies(48)
jayd16 No.45105619
In this imaginary timeline where initial investments keep increasing this way, how long before we see a leak shutter a company? Once the model is out, no one would pay for it, right?
replies(6)
petesergeant No.45112321
gpt-oss-120b has cost OpenAI virtually all of the revenue it was getting from me, because I can pay Cerebras and Groq a fraction of what I was paying for o4-mini and get dramatically faster inference, from a model that passes my eval suite. This is to say, I think high-quality "open" models that are _good enough_ are a much bigger threat. Even more so since OpenRouter has essentially commoditized generation.

Each new commercial model doesn't just need to beat its predecessor; it needs to be enough better than the SOTA open models at bread-and-butter generation that I'm willing to pay the developer a premium to use their resources.
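The switching logic here is simple back-of-envelope arithmetic. A minimal sketch, where every price is a hypothetical illustration rather than an actual vendor rate:

```python
# When does routing traffic to a "good enough" open model pay off?
# Prices are hypothetical per-million-token rates, not real vendor pricing.
def monthly_savings(tokens_per_month: float,
                    closed_price_per_mtok: float,
                    open_price_per_mtok: float) -> float:
    """Dollar savings per month from moving traffic to the cheaper host,
    assuming the open model already passes your eval suite."""
    return tokens_per_month / 1e6 * (closed_price_per_mtok - open_price_per_mtok)

# e.g. 500M tokens/month at $4.40/Mtok closed vs $0.35/Mtok open hosting
print(f"${monthly_savings(500e6, 4.40, 0.35):,.0f}/month")
```

The point the comment makes falls out directly: once the eval suite passes, the premium the closed model can charge is capped by exactly this difference.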