
Anthropic raises $13B Series F

(www.anthropic.com)
585 points by meetpateltech
llamasushi ◴[] No.45105325[source]
The compute moat is getting absolutely insane. We're basically at the point where you need a small country's GDP just to stay in the game for one more generation of models.

What gets me is that this isn't even a software moat anymore - it's literally just whoever can get their hands on enough GPUs and power infrastructure. TSMC and the power companies are the real kingmakers here. You can have all the talent in the world but if you can't get 100k H100s and a dedicated power plant, you're out.
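
A quick back-of-envelope on why "dedicated power plant" is roughly right, assuming ~700 W per H100 and a facility PUE of about 1.3 (both are assumed figures, not numbers from the thread):

    # Rough power draw for a 100k-H100 cluster; all inputs are assumptions
    gpus = 100_000
    watts_per_gpu = 700      # approx. H100 SXM board power, assumed
    pue = 1.3                # facility overhead (cooling, networking), assumed

    it_load_mw = gpus * watts_per_gpu / 1e6
    facility_mw = it_load_mw * pue
    print(f"IT load ~{it_load_mw:.0f} MW, at the wall ~{facility_mw:.0f} MW")
    # -> ~70 MW of GPUs, ~90 MW total: genuinely utility-scale power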

Wonder how much of this $13B is just prepaying for compute vs actual opex. If it's mostly compute, we're watching something weird happen - like the privatization of Manhattan Project-scale infrastructure. Except instead of enriching uranium we're computing gradient descents lol

The wildest part is we might look back at this as cheap. GPT-4 training was what, $100M? GPT-5/Opus-4 class probably $1B+? At this rate GPT-7 will need its own sovereign wealth fund
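
Taking those numbers at face value and assuming a flat ~10x jump per generation (the $100M and $1B anchors are the commenter's guesses; the growth rate is an assumption), the extrapolation is easy to run:

    # Naive frontier-training-cost extrapolation at an assumed ~10x per generation
    cost = 100e6  # GPT-4-class training cost in USD, assumed
    for gen in ("GPT-4", "GPT-5", "GPT-6", "GPT-7"):
        print(f"{gen}-class: ~${cost / 1e9:.1f}B")
        cost *= 10
    # -> GPT-7-class lands near $100B, i.e. sovereign-wealth-fund territory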

replies(48): >>45105396 #>>45105412 #>>45105420 #>>45105480 #>>45105535 #>>45105549 #>>45105604 #>>45105619 #>>45105641 #>>45105679 #>>45105738 #>>45105766 #>>45105797 #>>45105848 #>>45105855 #>>45105915 #>>45105960 #>>45105963 #>>45105985 #>>45106070 #>>45106096 #>>45106150 #>>45106272 #>>45106285 #>>45106679 #>>45106851 #>>45106897 #>>45106940 #>>45107085 #>>45107239 #>>45107242 #>>45107347 #>>45107622 #>>45107915 #>>45108298 #>>45108477 #>>45109495 #>>45110545 #>>45110824 #>>45110882 #>>45111336 #>>45111695 #>>45111885 #>>45111904 #>>45111971 #>>45112441 #>>45112552 #>>45113827 #
duxup ◴[] No.45105396[source]
It's not clear to me that each new generation of models is going to be "that" much better vs cost.

Anecdotally moving from model to model I'm not seeing huge changes in many use cases. I can just pick an older model and often I can't tell the difference...

Video seems to be moving forward fast from what I can tell, but it sounds like the back-end compute cost there is skyrocketing along with it, which raises other questions.

replies(9): >>45105636 #>>45105699 #>>45105746 #>>45105777 #>>45105835 #>>45106211 #>>45106364 #>>45106367 #>>45106463 #
yieldcrv ◴[] No.45105746[source]
Locally run video models that are just as good as today’s closed models are going to be the watershed moment

The companies doing foundational video models have stakeholders that don’t want to be associated with what people really want to generate

But they are pushing the space forward and the uncensored and unrestricted video model is coming

replies(3): >>45105817 #>>45105903 #>>45110285 #
xenobeb ◴[] No.45110285[source]
The problem is that the video models are only impressive in news stories about them. When you actually try to use them, you can see how the marketing plays to people's imagination, because in practice they are a massive disappointment.
replies(1): >>45110422 #
xnx ◴[] No.45110422[source]
Not my experience. Have you used Veo 3?