
Anthropic raises $13B Series F

(www.anthropic.com)
585 points | meetpateltech | 2 comments
llamasushi No.45105325
The compute moat is getting absolutely insane. We're basically at the point where you need a small country's GDP just to stay in the game for one more generation of models.

What gets me is that this isn't even a software moat anymore - it's literally just whoever can get their hands on enough GPUs and power infrastructure. TSMC and the power companies are the real kingmakers here. You can have all the talent in the world but if you can't get 100k H100s and a dedicated power plant, you're out.

Wonder how much of this $13B is just prepaying for compute vs actual opex. If it's mostly compute, we're watching something weird happen - like the privatization of Manhattan Project-scale infrastructure. Except instead of enriching uranium we're computing gradient descents lol

The wildest part is we might look back at this as cheap. GPT-4 training was what, $100M? GPT-5/Opus-4 class probably $1B+? At this rate GPT-7 will need its own sovereign wealth fund
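The extrapolation here is easy to sketch. Assuming the roughly 10x-per-generation jump implied by the $100M and $1B+ guesses above (both the base cost and the growth rate are the commenter's estimates, not confirmed figures):

```python
def gen_costs(base_cost: float, growth: float, n: int) -> list[float]:
    """Training cost of the next n model generations under constant
    multiplicative growth (a pure back-of-the-envelope model)."""
    return [base_cost * growth**i for i in range(n)]

# Thread's guesses: ~$100M for a GPT-4-class run, ~10x per generation.
costs = gen_costs(100e6, 10, 4)
for gen, cost in enumerate(costs, start=4):
    print(f"GPT-{gen}-class run: ${cost:,.0f}")
```

Two generations past the ~$1B mark, a single run is already in the ~$100B range, which is the "sovereign wealth fund" territory the comment is joking about.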

duxup No.45105396
It's not clear to me that each new generation of models is going to be "that" much better vs cost.

Anecdotally moving from model to model I'm not seeing huge changes in many use cases. I can just pick an older model and often I can't tell the difference...

Video seems to be moving forward fast from what I can tell, but it sounds like the back-end compute cost there is skyrocketing along with it, which raises other questions.

renegade-otter No.45105699
We do seem to be hitting the top of the curve of diminishing returns. Forget AGI - they need a performance breakthrough in order to stop shoveling money into this cash furnace.
reissbaker No.45106035
According to Dario, each model line has generally been profitable: e.g. spend $200MM training a model that makes $1B in profit over its lifetime. But since each generation is more expensive to train than the last, they keep needing to raise more money for the next one, and the company's balance sheet looks negative: they spent more this year than last (the training cost for model N+1 is higher), and this year's model earned less than this year's spend. Even if each model generation is profitable in isolation, model N doesn't earn enough to fund training model N+1 without raising, and spending, more money.

That's still a pretty good deal for an investor: if I give you $15B, you will probably make a lot more than $15B with it. But it does raise questions about when it will simply become infeasible to train the subsequent model generation due to the costs going up so much (even if, in all likelihood, that model would eventually turn a profit).

dom96 No.45106689
> if I give you $15B, you will probably make a lot more than $15B with it

"probably" is the key word here; this feels like a Ponzi scheme to me. What happens when the next model isn't a big enough jump over the last one to repay the investment?

It seems like this already happened with GPT-5. They've hit a wall, so how can they be confident enough to invest ever more money into this?

bcrosby95 No.45107077
I think you're really bending over backwards to make this company seem non-viable.

If model training has truly turned out to be profitable at the end of each cycle, then this company is going to make money hand over fist, and investing money to out compete the competition is the right thing to do.

Most megacorps start out wildly unprofitable because they're investing in the core business... until they aren't. People seem to forget the days when Facebook was seen as perpetually unprofitable. This is how basically every huge tech company you know today started.

serf No.45107188
>I think you're really bending over backwards to make this company seem non viable.

Having experienced Anthropic as a customer, I have a hard time thinking that their inevitable failure (something I'd bet on) will be model/capability-based; that's how bad they are at every other customer-facing metric.

You think Amazon is frustrating to deal with? Get into a CSR-chat-loop with an uncaring LLM followed up on by an uncaring CSR.

My minimum response time from their customer service is 14 days (2 weeks) while paying $200 a month.

An LLM could be 'The Great Kreskin' and I would still try to avoid paying for that level of abuse.

StephenHerlihyy No.45111602
What's fun is that I have had Anthropic's AI support give me blatantly false information. It tried to tell me that I could get a full year's worth of Claude Max for only $200. When I asked if that was true, it quickly backtracked and acknowledged its mistake. I figure someone more litigious will eventually try to capitalize.
nielsbot No.45112519
"Air Canada must honor refund policy invented by airline’s chatbot"

https://arstechnica.com/tech-policy/2024/02/air-canada-must-...