Most active commenters
  • xpe(20)
  • utyop22(19)
  • (15)
  • simianwords(13)
  • arduanika(9)
  • lelanthran(7)
  • duxup(6)
  • Zigurd(6)
  • jayd16(6)
  • tick_tock_tick(6)

Anthropic raises $13B Series F

(www.anthropic.com)
571 points meetpateltech | 629 comments
1. duxup ◴[] No.45105028[source]
These numbers seem made up at times / difficult to comprehend what they expect is happening ...
replies(7): >>45105155 #>>45105162 #>>45105176 #>>45105213 #>>45105228 #>>45105265 #>>45105305 #
2. chpatrick ◴[] No.45105155[source]
Depends on if you think we're at the end of AI development or the beginning.
3. Rebuff5007 ◴[] No.45105162[source]
Probably because they are made up, and no one is able to comprehend what is happening.
4. jdoliner ◴[] No.45105172[source]
Every round Anthropic raises twists the knife deeper in SBF. If only he could have survived the downturn, his Anthropic investment alone probably could have papered over the other losses.
replies(11): >>45105195 #>>45105238 #>>45105292 #>>45105321 #>>45105362 #>>45105437 #>>45105457 #>>45105972 #>>45106292 #>>45106294 #>>45108286 #
5. isoprophlex ◴[] No.45105176[source]
It's a post-money valuation, so that suggests the money involved has transcended beyond actual moneyness into some other post-meaningful realm.
replies(2): >>45105355 #>>45106256 #
6. paulpauper ◴[] No.45105178[source]
FTX creditors should be seeing red. The trustee sold Anthropic out at the bottom. Same for crypto. Hindsight is 20-20, but imagine if CZ had not made those tweets about divesting from the FTT token. FTX could have possibly weathered the final 3 months of the BTC bear market and then reaped the post-2023 AI and crypto bull market. Sam would have gone from pauper in jail to brilliant investor in Anthropic, mogul, and so on.
replies(2): >>45105267 #>>45105716 #
7. paulpauper ◴[] No.45105195[source]
yeah, had CZ not made those tweets... He only had to weather another 2 months of the BTC bear market. BTC began to rebound in Jan 2023. Of course, hindsight is 20-20.
replies(1): >>45106019 #
8. usrnm ◴[] No.45105203[source]
I feel like the money itself makes less and less sense these days. It's just numbers that are becoming increasingly detached from the real world
replies(9): >>45105400 #>>45105471 #>>45105483 #>>45105548 #>>45105579 #>>45105803 #>>45105863 #>>45105974 #>>45106999 #
9. perks_12 ◴[] No.45105213[source]
Look at this post: https://x.com/NicoleSHsing/status/1961505968782774778

We're in a VC bubble; any project that mentions AI gets tons of money.

replies(2): >>45105291 #>>45105479 #
10. aaronblohowiak ◴[] No.45105228[source]
Alphabet is "worth" $2.45 trillion on the public market; is Anthropic worth a bit less than 10% of Google going forward? I don't think that's entirely unreasonable...
replies(4): >>45105327 #>>45105405 #>>45105431 #>>45105612 #
11. ramesh31 ◴[] No.45105238[source]
>"Every round Anthropic raises twists the knife deeper in SBF. If only he could have survived the downturn his Antropic investment alone probably could have papered over the other loses."

Things working out in the end doesn't make what he did not a crime at the time. He was a common paper hanger, albeit with billions instead.

replies(1): >>45105999 #
12. paulpauper ◴[] No.45105265[source]
People said the same about OpenAI in 2023, only valued at $30 billion at the time, and then seemingly overnight ChatGPT became a major commercial product rivaling Google. Or Tesla valuations in 2019: it went from a niche brand to Teslas everywhere after Covid. These VCs are not as irrational as commonly assumed. They know that if a product gains critical mass, it can become everything.
replies(3): >>45105363 #>>45105528 #>>45106072 #
13. ealexhudson ◴[] No.45105267[source]
The trustee's reports on FTX's internal processes were damning. Even if they had held their Anthropic stake on the way up, who's to say their internal FTT ledger and the black holes in the Alameda books would not have eclipsed it?

The issue wasn't that crypto markets in general were down at that point; the issue was they were doing frauds.

replies(2): >>45105293 #>>45105347 #
14. rvz ◴[] No.45105285[source]
Enron was worth $60B - $100B once.
replies(1): >>45105336 #
15. potatoproduct ◴[] No.45105290[source]
I predict a lot of people are going to lose a lot of money.
replies(1): >>45106041 #
16. seneca ◴[] No.45105291{3}[source]
That genuinely feels like satire. I guess the beauty of good satire is that it borders on reality. The Juicero of the AI era.
17. FinnLobsien ◴[] No.45105292[source]
Always makes you wonder how many companies that are successes today could’ve had their SBF moment, but market conditions kept them afloat
replies(3): >>45105329 #>>45105484 #>>45110244 #
18. paulpauper ◴[] No.45105293{3}[source]
The frauds were stopped by Sam going to jail. There was still money left to recover by liquidating, which in hindsight was very poorly timed.
replies(3): >>45105464 #>>45105499 #>>45105833 #
19. tinyhouse ◴[] No.45105305[source]
This is the fastest-growing company by revenue, jumping from $1B to $3B in just five months. Hitting $10B is only a matter of time, which would put its valuation at a reasonable ~18x sales multiple. It doesn't even matter where we are in the AI hype cycle - AI adoption will keep increasing, it's not even a question at this point.

From a technical perspective, they manage to attract top talent - Google / OpenAI lose a lot of good people to Anthropic. This is important since there are few people who can transform a business (e.g., the guy who built Claude Code). Being attractive to top talent means you're more likely to stumble upon them.

replies(3): >>45105370 #>>45105381 #>>45109444 #
20. bambax ◴[] No.45105321[source]
Probably would have made his crimes less visible, but not less criminal.
replies(3): >>45105401 #>>45105423 #>>45106368 #
21. llamasushi ◴[] No.45105325[source]
The compute moat is getting absolutely insane. We're basically at the point where you need a small country's GDP just to stay in the game for one more generation of models.

What gets me is that this isn't even a software moat anymore - it's literally just whoever can get their hands on enough GPUs and power infrastructure. TSMC and the power companies are the real kingmakers here. You can have all the talent in the world but if you can't get 100k H100s and a dedicated power plant, you're out.

Wonder how much of this $13B is just prepaying for compute vs actual opex. If it's mostly compute, we're watching something weird happen - like the privatization of Manhattan Project-scale infrastructure. Except instead of enriching uranium we're computing gradient descents lol

The wildest part is we might look back at this as cheap. GPT-4 training was what, $100M? GPT-5/Opus-4 class probably $1B+? At this rate GPT-7 will need its own sovereign wealth fund

replies(48): >>45105396 #>>45105412 #>>45105420 #>>45105480 #>>45105535 #>>45105549 #>>45105604 #>>45105619 #>>45105641 #>>45105679 #>>45105738 #>>45105766 #>>45105797 #>>45105848 #>>45105855 #>>45105915 #>>45105960 #>>45105963 #>>45105985 #>>45106070 #>>45106096 #>>45106150 #>>45106272 #>>45106285 #>>45106679 #>>45106851 #>>45106897 #>>45106940 #>>45107085 #>>45107239 #>>45107242 #>>45107347 #>>45107622 #>>45107915 #>>45108298 #>>45108477 #>>45109495 #>>45110545 #>>45110824 #>>45110882 #>>45111336 #>>45111695 #>>45111885 #>>45111904 #>>45111971 #>>45112441 #>>45112552 #>>45113827 #
22. potatoproduct ◴[] No.45105327{3}[source]
Sounds hugely unreasonable. At 1% I might've believed you.
23. adamgordonbell ◴[] No.45105329{3}[source]
> In the early days of FedEx, Smith had to go to great lengths to keep the company afloat. In one instance, after a crucial business loan was denied, he took the company's last $5,000 to Las Vegas and won $27,000 gambling on blackjack to cover the company's $24,000 fuel bill.

Some who take on unreasonable risk will be among the most successful people alive. Most will lose eventually, long before you hear about them, if they keep taking crazy risks.

Who is a great genius, and who is just winning at "The Martingale entrepreneurial strategy"?

replies(2): >>45105687 #>>45106254 #
24. baalimago ◴[] No.45105333[source]
Prediction: this is the final big "hufff" before the bubble bursts.
replies(3): >>45105436 #>>45110122 #>>45114515 #
25. NewJazz ◴[] No.45105336[source]
Intel still is.
26. boringg ◴[] No.45105347{3}[source]
I think the implication is that fraudsters rarely get busted when they are making everyone money, only when things are looking bad. Eventually it catches up, though.
27. fourseventy ◴[] No.45105354[source]
I wonder what SBF's shares would be worth.
replies(1): >>45110450 #
28. saberience ◴[] No.45105355{3}[source]
Post-money just means you add the value of the actual investment to the valuation. E.g. the pre-money valuation here would be $183B - $13B, i.e. a pre-money valuation of $170B.
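A minimal sketch of that arithmetic, using the figures from this round:

    post_money = 183e9  # reported post-money valuation, USD
    raised = 13e9       # size of the Series F, USD
    pre_money = post_money - raised
    print(f"pre-money: ${pre_money / 1e9:.0f}B")  # -> pre-money: $170B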
replies(1): >>45105567 #
29. stravant ◴[] No.45105362[source]
That assumes he would have stopped with the shenanigans, which is a pretty big if.
replies(1): >>45105587 #
30. duxup ◴[] No.45105363{3}[source]
I'm not sure a couple of successes make sense of these numbers.
31. miltonlost ◴[] No.45105370{3}[source]
My baby grew from 9 pounds to 18 pounds in 3 months! Hitting 10,000 lbs is only a matter of time.
replies(2): >>45106172 #>>45108491 #
32. aqme28 ◴[] No.45105381{3}[source]
I thought ~20x or so was a good baseline earnings multiple. I have no idea what makes sense as a revenue multiple but I bet it would be a lot lower than that.

Edit: After looking it up, normal P/Sales ratios are on the order of about 1. They vary from like .2 to 8 depending on industry.

replies(2): >>45106157 #>>45109828 #
33. duxup ◴[] No.45105396[source]
It's not clear to me that each new generation of models is going to be "that" much better vs cost.

Anecdotally moving from model to model I'm not seeing huge changes in many use cases. I can just pick an older model and often I can't tell the difference...

Video seems to be moving forward fast from what I can tell, but it sounds like the back-end cost of compute there is skyrocketing with it, raising other questions.

replies(9): >>45105636 #>>45105699 #>>45105746 #>>45105777 #>>45105835 #>>45106211 #>>45106364 #>>45106367 #>>45106463 #
34. fullshark ◴[] No.45105400[source]
The real world sees no other opportunities for outsized returns. Too much money chasing too little opportunity.
replies(2): >>45105489 #>>45105724 #
35. toomuchtodo ◴[] No.45105401{3}[source]
Like Martin Shkreli, who made his investors whole with his gambling, but still went to jail.
replies(1): >>45105707 #
36. StopDisinfo910 ◴[] No.45105405{3}[source]
Alphabet 2024 revenue: $350 billion. Anthropic 2024 revenue: $1 billion.

Unreasonable doesn’t even start to capture it. Anthropic being worth 10% of Alphabet is beyond insane.

replies(12): >>45105590 #>>45105603 #>>45105616 #>>45105630 #>>45105683 #>>45105735 #>>45105744 #>>45106511 #>>45107650 #>>45108204 #>>45108814 #>>45108858 #
37. pnathan ◴[] No.45105411[source]
Run-rate revenue of 1b vs 3b. Those are big values.

I am very curious about the GAAP numbers here.

38. paulddraper ◴[] No.45105412[source]
Reductive.

Doesn’t explain Deepseek.

replies(1): >>45105462 #
39. scellus ◴[] No.45105420[source]
So far it doesn't seem like winner-take-all, and all the major players (OpenAI, Anthropic, xAI, Google, Meta?) are backed by strong partnerships and a lot of capital. It is capital-intensive this round though, so the primary producers are big and few. As long as they compete, benefits mostly go to other parties (= society) through increased productivity.
40. ankit219 ◴[] No.45105422[source]
Their projections for ARR at the end of this year run as high as $9B[1]. And reported gross margins of 60% (-30% with the cloud provider partnerships). All things considered, if this pans out, it's a ~20x multiple. High, yes, but not that crazy. Especially considering their growth rate, and that too at a decent margin at the GM level.

[1]: It was $3B at the end of May (so likely $250M in May alone), and $5B at the end of July (so $400M that month).
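A rough sketch of the implied multiple, assuming the $183B post-money from this round and the $9B / 60% figures above (the parent's projections, not official guidance):

    valuation = 183e9      # post-money valuation from this round, USD
    projected_arr = 9e9    # the $9B year-end ARR projection above, USD
    gross_margin = 0.60    # the reported gross margin cited above

    print(f"~{valuation / projected_arr:.0f}x forward ARR")                    # -> ~20x
    print(f"~{valuation / (projected_arr * gross_margin):.0f}x gross profit")  # -> ~34x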

replies(3): >>45106080 #>>45110501 #>>45111948 #
41. brandall10 ◴[] No.45105423{3}[source]
Let's not pretend there aren't multitudes out there doing similar things who never get caught. SBF was just more egregious and untimely w/ his actions.
replies(3): >>45105598 #>>45106138 #>>45106151 #
42. seneca ◴[] No.45105431{3}[source]
It feels a bit unreasonable to me. Anthropic is arguably comparable to Google's Gemini program. Is Gemini 10% of Alphabet's value? If so, how much of that is because of its ability to consume and interact with things like YouTube and Workspaces?

I could see two or three percent, but this seems like a pretty big stretch. Then again, I'm not a VC.

replies(1): >>45105771 #
43. NitpickLawyer ◴[] No.45105436[source]
If we make a comparison to the dotcom bubble, this bubble will take the equivalent of catsdotcom and dogsdotcom, not the equivalent of FAANG++. And even that comparison is iffy, because we just don't know where the end is with this one. We've seen capabilities only increase so far. We've also seen prices decrease by orders of magnitude between SotA "generations". Things continue to scale, and no one knows how far it'll go. There's a reason everyone is doing the coding agent cli of the month, and everyone is heavily subsidising coding - data, more data, and crucially (hah) signals on generation quality, acceptance rate and so on. Take that, put it in the new generation, training goes brr, post-training goes RL, etc.
44. LarsDu88 ◴[] No.45105437[source]
So the difference between criminal fraud and prescient genius investor is a difference of a year or so.

We should all try to remember this the next time we vote to cut taxes on billionaires.

replies(1): >>45106181 #
45. me551ah ◴[] No.45105447[source]
I don't get the sky high valuation of LLM companies. I mean I get that these guys need a lot of money for compute to train the next generation of models. But Distillation does make it easy for other providers to replicate gains made by these providers at a much lower cost.

On a long enough timeframe, the open source models will catch up to the proprietary models and inference providers will beat these proprietary companies on price.

replies(1): >>45105685 #
46. bobbiechen ◴[] No.45105456[source]
Did anyone else get offers to join special-purpose vehicles (SPVs) to invest in this Anthropic round?

I got the impression that some people were reselling access and adding layers of fees to profit from the hype.

replies(2): >>45105648 #>>45110728 #
47. stefan_ ◴[] No.45105457[source]
What if we put criminals into prison because they committed crimes, regardless of them making their victims "whole" (it would not happen anyway).
48. FergusArgyll ◴[] No.45105462{3}[source]
The DeepSeek story was way overblown. Read the gpt-oss paper: the actual training run is not the only expense. You have multiple experimental training runs as well as failed training runs. Plus, they were behind SOTA even then.
49. ealexhudson ◴[] No.45105464{4}[source]
Sure, but would we really want to tell liquidators to manage assets for best eventual return rather than just convert everything to cash? In this instance, in hindsight, sure - you'd want the other thing, you want the bitcoin not the cash. But this feels like the exception that proves the rule.
50. Hamuko ◴[] No.45105471[source]
So, a bubble?
51. koakuma-chan ◴[] No.45105479{3}[source]
What's wrong with that post?
replies(1): >>45106332 #
52. nradov ◴[] No.45105480[source]
That's why wealthy investors connected to the AI industry are also throwing a lot of money into power generation startups, particularly fusion power. I doubt that any of them will actually deliver commercially viable fusion reactors but hope springs eternal.
replies(2): >>45105618 #>>45105953 #
53. waynenilsen ◴[] No.45105483[source]
comparisons with internet age very much resonate - dark compute will be as dark fiber was
replies(3): >>45105662 #>>45105993 #>>45106719 #
54. hn_throwaway_99 ◴[] No.45105484{3}[source]
I think it's really objectionable to refer to this as an "SBF moment".

It's not just about surviving a downturn and unforeseen circumstances with some luck (like the sibling talking about FedEx barely making it). Tesla, for example, was famously extremely close to bankruptcy.

But SBF got into the situation he was in due to his egregious fraud. The accounting at FTX was a criminal joke, with multiple sets of books, bypassable controls, outright fake numbers. My guess is that if SBF had survived that particular BTC downturn, his extreme hubris and willingness to commit fraud would have eventually done him in - downturns always happen at some point, and his brazenness in his criminal enterprise showed no signs of learning from mistakes.

Sure, all hugely successful companies have a ton of luck involved. But I think it's a mistake to pretend that SBF was just done in by bad timing, or that all companies do what he did. His empire collapse was pretty inevitable IMO if you look at what a clown show FTX was under the covers.

replies(3): >>45106075 #>>45106864 #>>45107690 #
55. jjangkke ◴[] No.45105485[source]
well i really hope they will use some of this money to compete with codex and release something quick

chat gpt 5 in codex is really good

so much that i stopped using claude code altogether

cheaper too

made me realize nobody has moat, coders especially will just go to whoever provides best bang for their buck.

replies(1): >>45107561 #
56. prasadjoglekar ◴[] No.45105489{3}[source]
Yup! Public markets are at all time highs. Other hard assets are also at all time highs. This sort of speculative investment only makes sense when nothing else is attractive.

And it's cash from asset managers. It's not 10Bn worth of compute time from Microsoft or Google.

replies(1): >>45107503 #
57. dgacmu ◴[] No.45105499{4}[source]
The job of the trustee of a bankrupt company is not to commit further fraud by gambling with the remaining funds.
58. j7ake ◴[] No.45105513[source]
Wait did I see “ Ontario Teachers' Pension Plan” as an investor?

Are they putting Canadian public funds into Anthropic?

replies(8): >>45105573 #>>45105577 #>>45105582 #>>45105629 #>>45105653 #>>45106047 #>>45108160 #>>45109568 #
59. lm28469 ◴[] No.45105528{3}[source]
It's still bleeding money with no profitability in sight, niche product or household name
replies(1): >>45105563 #
60. OhMeadhbh ◴[] No.45105534[source]
Is it just me or does something smell... bubbly in here?
61. throw310822 ◴[] No.45105535[source]
Just in case, can they be repurposed for bitcoin mining? :)

Edit: for the curious, no. An H100 costs ~$25k and produces $1.2/day mining bitcoin, without factoring in electricity.
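A quick back-of-the-envelope payback check using those figures (electricity still ignored):

    h100_price = 25_000   # approximate price of one H100, USD
    daily_revenue = 1.2   # rough bitcoin mining revenue per day, USD

    payback_years = h100_price / daily_revenue / 365
    print(f"payback: ~{payback_years:.0f} years")  # -> payback: ~57 years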

replies(2): >>45105701 #>>45105853 #
62. OhMeadhbh ◴[] No.45105548[source]
Money is impossible. Money is beautiful. Money is theft.

[Voted down by the cash cabal! Arise! Knowledge workers of the world, you have nothing to lose but your SPARE CHANGE!]

63. me551ah ◴[] No.45105549[source]
And distillation makes the compute moat irrelevant. You could spend trillions to train a model, but some company is going to get enough data from your model and distill its own at a much cheaper upfront cost. This would allow them to offer it at a cheaper inference cost too, totally defeating the point of spending crazy money on training.
replies(1): >>45105769 #
64. code4tee ◴[] No.45105551[source]
Impressive round, but it seems unlikely this game can go on much longer before something implodes. Given the amount of cash you need to set on fire to stay relevant, it's becoming nearly impossible for all but a few players to stay competitive, but those players have yet to demonstrate a viable business model.

With all these models converging, the big players aren’t demonstrating a real technical innovation moat. Everyone knows how to build these models now, it just takes a ton of cash to do it.

This whole thing is turning into an expensive race to the bottom. Cool tech, but bad business. A lot of VC folks gonna lose their shirt in this space.

replies(7): >>45105789 #>>45105933 #>>45105952 #>>45105968 #>>45106173 #>>45109023 #>>45115194 #
65. paulpauper ◴[] No.45105563{4}[source]
Same again for Amazon, Tesla, Uber and others. Then they began making billions. Anthropic is not a niche product anymore, though. Same for ChatGPT.
replies(2): >>45105654 #>>45105840 #
66. aroman ◴[] No.45105567{4}[source]
I think you missed their joke :)
replies(1): >>45105639 #
67. sebzim4500 ◴[] No.45105573[source]
Weren't they also a significant investor in FTX?
replies(1): >>45106259 #
68. OhMeadhbh ◴[] No.45105577[source]
Yes.
69. ACCount37 ◴[] No.45105579[source]
If I had a dime for every time I see this kind of hot take, I'd be able to buy an H200 with that.

A man looks at economics. Understands nothing. Thinks it must be all fake and made up. He must be so smart for seeing through it!

replies(2): >>45105730 #>>45106425 #
70. datadrivenangel ◴[] No.45105582[source]
That's how investments this big get made: pension funds and other similar trusts need returns, and at a certain point, if SoftBank says they have a way to deploy billions of dollars, you don't have better options...
71. Zigurd ◴[] No.45105583[source]
Substitute fiber and routers for GPUs and this starts to look familiar.
replies(4): >>45105884 #>>45106028 #>>45106515 #>>45107108 #
72. j45 ◴[] No.45105586[source]
Very happy for them - curious if the funding will help with the current capacity issues.

5 minutes into my first opus prompt on Claude Code on an empty repo, I've already been warned by Claude Code that I'm about to hit my opus limit despite not using it in 12 days.

73. Symmetry ◴[] No.45105587{3}[source]
He proudly proclaimed on the Conversations with Tyler podcast that, given a double-or-nothing bet with a 51% chance of success, he'd keep playing forever.
replies(3): >>45105755 #>>45106147 #>>45108472 #
74. charcircuit ◴[] No.45105590{4}[source]
The valuation is not based solely on last year's revenue. Revenue doesn't really matter at this point.
replies(1): >>45106118 #
75. tzury ◴[] No.45105592[source]
When your product is 5x better than OpenAI, you can afford ~40% of their valuation, especially when you achieved it with simpler marketing strategies.
76. loeg ◴[] No.45105598{4}[source]
There are not.
77. scottLobster ◴[] No.45105604[source]
Roughly 1% of US GDP in 2025 was data center construction, mostly for AI.
78. y0eswddl ◴[] No.45105603{4}[source]
And that's not even looking at profits vs valuation...
79. datadrivenangel ◴[] No.45105612{3}[source]
It's both insane and not unreasonable. If Anthropic's internal version of Claude Code gets so good that they can recreate all of google's products quickly there's no moat anymore.

If AI is winner take all, then the value is effectively infinite. Obviously insane, but maybe it's winner take most?

replies(3): >>45105756 #>>45107149 #>>45108023 #
80. YetAnotherNick ◴[] No.45105616{4}[source]
> The company said its run-rate revenue has increased from around $1 billion at the beginning of 2025, to more than $5 billion in August.

So 10% of the valuation for 1.5% of the revenue, with revenue that grew 5x in the last 6 months. Doesn't seem as unrealistic as you put it, if it has a good gross margin, which some expect to be 60%.

Also Google was valued at $350B when it had $5B revenue.[1]

[1]: https://companiesmarketcap.com/alphabet-google/marketcap/

81. mapt ◴[] No.45105618{3}[source]
Continuing to carve out economies of scale in battery + photovoltaic for another ten doublings has plenty of positive externalities.

The problem is that in the meantime, they're going to nuke our existing powergrid, created in the 1920's to 1950's to serve our population as it was in the 1970's, and for the most part not expanded since. All of the delta is in price-mediated "demand reduction" of existing users.

replies(1): >>45105761 #
82. jayd16 ◴[] No.45105619[source]
In this imaginary timeline where initial investments keep increasing this way, how long before we see a leak shutter a company? Once the model is out, no one would pay for it, right?
replies(6): >>45105704 #>>45105708 #>>45105778 #>>45105857 #>>45106040 #>>45112321 #
83. noleary ◴[] No.45105629[source]
Ontario Teachers' is a pretty active principal in venture/growth financings and a major LP to a bunch of funds. That said, venture/growth is a pretty small percentage of their holdings.

---

[1] https://www.crunchbase.com/organization/ontario-teachers-pen...

[2] https://www.otpp.com/en-ca/investments/our-investments/teach...

84. tdullien ◴[] No.45105630{4}[source]
It's just off by a factor of 35?
replies(1): >>45106153 #
85. ljlolel ◴[] No.45105636{3}[source]
The scaling laws already predict diminishing returns
86. vincefutr23 ◴[] No.45105637[source]
why would nvidia not create their own foundational model?
replies(6): >>45105819 #>>45106081 #>>45106106 #>>45106160 #>>45107633 #>>45113775 #
87. saberience ◴[] No.45105639{5}[source]
Or the joke was so bad and non-obvious that their comment just reads like someone who has no idea what "post-money" actually means :)
88. DebtDeflation ◴[] No.45105641[source]
The wildest part is that the frontier models have a lifespan of 6 months or so. I don't see how it's sustainable to keep throwing this kind of money at training new models that will be obsolete in the blink of an eye. Unless you believe that AGI is truly just a few model generations away and once achieved it's game over for everyone but the winner. I don't.
replies(2): >>45105828 #>>45106723 #
89. manveerc ◴[] No.45105648[source]
Many SPVs were available for recent funding rounds, but my biggest gripe was the excessive fees layered on top of them.

More importantly, we should ask who will be left holding the bag when this bubble bursts. For now, investors are getting their money back through acquisitions. Founders with desirable, traditional credentials are doing well, as are early employees at large AI startups who are cashing out on the secondary market. It appears the late-stage employees will be the ones who lose the most.

90. ◴[] No.45105653[source]
91. lm28469 ◴[] No.45105654{5}[source]
Are they the exceptions or the rules, that's the question.
92. sgnelson ◴[] No.45105662{3}[source]
For me that brings up two questions:

1) Will I (and others) be able to get a H100 (or similar) when the bubble pops, and would that lead to new innovations from the GPU poor?

2) Will China take the lead in AI as they are less "capitalistic" with the demands for outsized returns on their investment compared to US companies, and they may be more willing to continue to sink money into AI despite possible market returns?

replies(1): >>45106700 #
93. bgwalter ◴[] No.45105666[source]
Interesting that investors pay so many billions for a product that just iterates until something, somehow compiles but emits subtle garbage.

Intellectual engagement goes down, users get dumber and only look at quantity. China is taking first steps to continue its excellence. In the New York Post of all places:

https://nypost.com/2025/08/19/world-news/china-restricts-ai-...

"It’s just one of the ways China protects their youth, while we feed ours into the jaws of Big Tech in the name of progress."

94. cooloo ◴[] No.45105675[source]
Just a question of time until the bubble will burst.
95. AnimalMuppet ◴[] No.45105677[source]
One of my rules of thumb: When money is growing on trees, pick it.

That applies to individuals, but it probably also applies to companies. We're in an AI boom? Raise some money while it's easy.

replies(1): >>45109306 #
96. worldsayshi ◴[] No.45105679[source]
And we're still sort of on the fence if it's even that useful?

Like sure, it saves me a bit of time here and there, but will scaling up really solve the reliability issues that are the real bottleneck?

replies(3): >>45105767 #>>45110297 #>>45111335 #
97. nostrademons ◴[] No.45105683{4}[source]
I thought the same when choosing to invest in Intel rather than NVidia in 2022. At the time, Intel was worth $310B while NVidia was worth $650B, yet Intel's revenue was $80B/year while NVidia's was $25B. I was like "There's no way I'm paying 2x the price for 1/3 the revenue." Now, NVidia is worth $4T (a return of roughly 7x) on revenue of $165B, and Intel is worth $105B (a return of roughly -66%) on revenue of $53B.

Investors are forward looking, and market conditions can change abruptly. If Anthropic actually displaces Google, it's amazingly cheap at 10% of Alphabet's market cap. (Ironically, I even knew that NVidia was displacing Intel at the time I invested, but figured that the magnitude of the transition couldn't possibly be worth the price differential. News flash: companies can go to zero, and be completely replaced by others, and when that happens their market caps just swap.)

replies(1): >>45106341 #
98. nradov ◴[] No.45105685[source]
The high valuations are essentially lottery tickets, not something based on any sort of calculation of discounted future cashflows. The bet is that the researchers working for some of those frontier AI model companies will come up with innovations that give them a sustainable competitive advantage that goes beyond just purchasing more compute and licensing more proprietary training data. Obviously they can't all succeed but perhaps one or two will get lucky, perhaps by figuring out how to greatly improve efficiency or something that isn't easily copied.
99. matheist ◴[] No.45105687{4}[source]
You know, it only just now occurs to me to wonder if the blackjack story is the public sanitized version of "how I got $24k because I'm not allowed to tell you the real version"
replies(2): >>45105951 #>>45106018 #
100. stephencoyner ◴[] No.45105689[source]
Very interesting to see firms who already bet big on OpenAI (like Altimeter) on the list for this round. Anyone else remember when OpenAI told investors they couldn’t invest in competitors [1]?

[1]https://www.reuters.com/technology/openai-tells-investor-not...

101. renegade-otter ◴[] No.45105699{3}[source]
We do seem to be hitting the top of the curve of diminishing returns. Forget AGI - they need a performance breakthrough in order to stop shoveling money into this cash furnace.
replies(6): >>45105775 #>>45105790 #>>45105830 #>>45105936 #>>45105998 #>>45106035 #
102. krupan ◴[] No.45105701{3}[source]
Before your edit I was going to answer, sadly no, they can't even be repurposed for Bitcoin mining.
103. marcosdumay ◴[] No.45105704{3}[source]
In this imaginary reality where LLMs just keep getting better and better, all that a leak means is that you will eat-up your capital until you release your next generation. And you will want to release it very quickly either way, and should have a problem for a few months at most.

And if LLMs don't keep getting qualitatively more capable every few months, that means that all this investment won't pay off and people will soon just use some open weights for everything.

104. hnav ◴[] No.45105707{4}[source]
He went to jail because his autism wouldn't allow him to be duplicitous like a CEO doing evil things has to be and he attracted too much negative attention.
replies(1): >>45106373 #
105. jsheard ◴[] No.45105708{3}[source]
Whatever happens if/when a flagship model leaks, the legal fallout would be very funny to watch. Lawyers desperately trying to thread the needle such that training on libgen is fair use, but training on leaked weights warrants the death penalty.
106. hiddencost ◴[] No.45105716[source]
He did crimes. Whether or not money was lost, it was still crimes.
107. marcosdumay ◴[] No.45105724{3}[source]
That's what wealth inequality does.
replies(2): >>45105804 #>>45105879 #
108. IshKebab ◴[] No.45105730{3}[source]
It is all fake and made up, and the numbers are detached from the real world, but it's not like the market doesn't know that.

Btw there's a decentish board game called Modern Art based around the pricing of art with no intrinsic value.

replies(2): >>45105836 #>>45107250 #
109. wongarsu ◴[] No.45105735{4}[source]
So all it takes is Anthropic 35x-ing their revenue once they start selling ad spots? That sounds pretty reasonable to me.

Right now nobody wants to be the first to offer advertising in LLM services, but LLM conversation history provides a wealth of data for ad targeting. And in more permissive jurisdictions you can have the LLM deliver ads organically in the conversation or just shift the opinions and biases of the model through a short mention in the system message

replies(3): >>45105899 #>>45105941 #>>45113747 #
110. docdeek ◴[] No.45105738[source]
> The compute moat is getting absolutely insane. We're basically at the point where you need a small country's GDP just to stay in the game for one more generation of models.

For what it is worth, $13 billion is about the GDP of Somalia (about 150th in nominal GDP), with a population of 15 million people.

replies(2): >>45106246 #>>45107079 #
111. matheist ◴[] No.45105744{4}[source]
Valuation includes expected future growth, it's not just present value of future revenue given today's numbers.

You may not agree with the market's estimation of that, but comparing just present revenue isn't really the right comparison.

replies(1): >>45106024 #
112. yieldcrv ◴[] No.45105746{3}[source]
Locally run video models that are just as good as today’s closed models are going to be the watershed moment

The companies doing foundational video models have stakeholders that don’t want to be associated with what people really want to generate

But they are pushing the space forward and the uncensored and unrestricted video model is coming

replies(3): >>45105817 #>>45105903 #>>45110285 #
113. AnimalMuppet ◴[] No.45105755{4}[source]
Not forever. He'd have nothing soon enough.
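A tiny simulation of that strategy, assuming he stakes the entire bankroll each round at 51% (illustrative parameters, not his actual bets):

    import random

    def survives(rounds=100, p_win=0.51):
        # Bet the entire bankroll every round; a single loss means ruin.
        return all(random.random() < p_win for _ in range(rounds))

    trials = 100_000
    alive = sum(survives() for _ in range(trials))
    print(f"still solvent after 100 rounds: {alive}/{trials}")
    # Expected survival rate is 0.51**100 -- effectively zero.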
114. throw310822 ◴[] No.45105756{4}[source]
It's the techno-hubristic version of Pascal's wager. The reward for the existence of God is infinite, so it's worth investing all the money in the world to create one.
115. UltraSane ◴[] No.45105761{4}[source]
A lot of the biggest data centers being built are also building behind the meter generation dedicated to them.
replies(1): >>45106503 #
116. lofaszvanitt ◴[] No.45105766[source]
Nvidia needs to grow.
117. bravetraveler ◴[] No.45105767{3}[source]
Assuming the best case: we're going to need to turn this productivity into houses or lifestyle improvement, soon... or I'm just going out with Sasquatch
replies(1): >>45107136 #
118. fredoliveira ◴[] No.45105769{3}[source]
A couple of counter-arguments:

Labs can just step up the way they track signs of prompts meant for model distillation. Distillation requires a fairly large number of prompt/response tuples, and I am quite certain that all of the main labs have the capability to detect and impede that type of use if they put their backs into it.

Distillation doesn't make the compute moat irrelevant. You can get good results from distillation, but (intuitively, maybe I'm wrong here because I haven't done evals on this myself) you can't beat the upstream model in performance. That means that most (albeit obviously not all) customers will simply gravitate toward the better performing model if the cost/token ratio is aligned for them.

Are there always going to be smaller labs? Sure, yes. Is the compute moat real, and does it matter? Absolutely.

replies(1): >>45107343 #
119. Zigurd ◴[] No.45105771{4}[source]
To make a similar comparison, Alphabet's Waymo has AVs that actually work. But they're not capturing 80% of Tesla's valuation.
replies(1): >>45108092 #
120. unsupp0rted ◴[] No.45105774[source]
Hopefully this'll give them another 3 months of runway, so they can go back to letting me use Claude Sonnet for 5 hours out of the 5-hour limit, rather than the 2.5 hours I'm getting now.

($100-plan, no agents, no mcp, one session at a time)

replies(1): >>45106053 #
121. jayde2767 ◴[] No.45105775{4}[source]
"cash furnace", so aptly put.
replies(2): >>45107976 #>>45112538 #
122. ACCount37 ◴[] No.45105777{3}[source]
The raw model scale is not increasing by much lately. AI companies are constrained by what fits in this generation of hardware, and waiting for the next generation to become available. Models that are much larger than the current frontier are still too expensive to train, and far too expensive to serve them en masse.

In the meanwhile, "better data", "better training methods" and "more training compute" are the main ways you can squeeze out more performance juice without increasing the scale. And there are obvious gains to be had there.

replies(2): >>45105841 #>>45106639 #
123. wmf ◴[] No.45105778{3}[source]
You can't run Claude on your PC; you need servers. Companies that have that kind of hardware are not going to touch a pirated model. And the next model will be out in a few months anyway.
replies(1): >>45106298 #
124. dcchambers ◴[] No.45105789[source]
And unfortunately, the amount of money being thrown around means that when the bottom falls out and its revealed that the emperor has no clothes, the implosion is going to impact all of us.

It's going to rock the market like we've never seen before.

replies(3): >>45105970 #>>45106114 #>>45108243 #
125. general1465 ◴[] No.45105790{4}[source]
Yep we do. There is a 1 year old video on YouTube, which describes this limitation https://www.youtube.com/watch?v=5eqRuVp65eY

Called efficient compute frontier

126. asveikau ◴[] No.45105797[source]
This sounds terrible for the environment.
127. yieldcrv ◴[] No.45105803[source]
So what should be exchanged for space inside a data center, what should be exchanged for the GPUs that they and everyone wants, what should be exchanged by the people that want to rent the GPUs before someone else

All of whom have a real world standardized thing to exchange for this already

Why do you think this discussion even needs to include the people who don’t have that standardized thing to exchange? If thats what you think

128. triceratops ◴[] No.45105804{4}[source]
+100
129. NetOpWibby ◴[] No.45105808[source]
I don't even know what this means.

What a fantastic amount of money flying around though, to support my inane queries to Claude.

130. giancarlostoro ◴[] No.45105817{4}[source]
Nobody wants to make a commercial NSFW model that then suffers a jailbreak... for what is the most illegal NSFW content.
replies(3): >>45106602 #>>45107433 #>>45109826 #
131. krupan ◴[] No.45105819[source]
Why didn't the people selling shovels to gold miners dig for gold themselves?

Because Nvidia is making actual profit selling hardware to those who do, not hoping for a big payout sometime in the future. Different risk/reward model, different goals.

132. jononor ◴[] No.45105828{3}[source]
It is being played like winner-takes-all right now (it may or may not be such a market). So it is a game of being the one left standing once the others fall off. In this kind of game, spending more is a strategy to increase the chances of other competitors running out of cash or otherwise hitting a wall. Sustainability is the opposite of the goal being pursued... Whether one reaches "AGI" is not considered important either, as long as one can starve out most competitors.

And for the newcomers, the scale needs to be bigger than what the incumbents (Google and Microsoft) have as discretionary spending - which is at least a few billion per year. Because at that rate, those companies can sustain it forever and would be default winners. So I think yearly expenditure is going to be 20B year++

replies(2): >>45106214 #>>45106637 #
133. fredoliveira ◴[] No.45105830{4}[source]
I think that the performance unlock from ramping up RL (RLVR specifically) is not fully priced into the current generation yet. Could be wrong, and people closer to the metal will know better, but people I talk to still feel optimistic about the next couple of years.
134. triceratops ◴[] No.45105833{4}[source]
> hindsight

If the liquidators had perfect hindsight, they'd be trading their own money. Not cleaning up other people's messes.

Their job is to be responsible and follow procedure.

135. gmadsen ◴[] No.45105835{3}[source]
Its not clear to me that it needs to. If at the margins it can still provide an advantage in the market or national defense, then the spice must flow
replies(1): >>45106033 #
136. xpe ◴[] No.45105836{4}[source]
Perhaps there are salient differences between art on a wall and a company.
replies(1): >>45106665 #
137. Zigurd ◴[] No.45105840{5}[source]
That's a pretty random selection. Amazon makes money. Uber clawed its way back from the pit of doom of not having a viable business model and being led by a jackass. Tesla is a meme stock. At best these examples tell us nothing.
138. xnx ◴[] No.45105841{4}[source]
> AI companies are constrained by what fits in this generation of hardware, and waiting for the next generation to become available.

Does this apply to Google that is using custom built TPUs while everyone else uses stock Nvidia?

replies(1): >>45106122 #
139. 2OEH8eoCRo0 ◴[] No.45105848[source]
A lot of moats are just money. Money to buy competition, capture regulation, buy exclusivity, etc.
140. wmf ◴[] No.45105853{3}[source]
There are other coins that are less unprofitable to mine (see https://whattomine.com/gpus ) but it's probably still not worth it.
141. willvarfar ◴[] No.45105855[source]
As humans don't actually work like LLMs do, we can surmise that there are far more efficient ways to get to AGI. We just need to find them.
replies(1): >>45106051 #
142. fredoliveira ◴[] No.45105857{3}[source]
> Once the model is out, no one would pay for it, right?

Well who does the inference at the scale we're talking about here? That's (a key part of) the moat.

143. xpe ◴[] No.45105863[source]
No disrespect to anyone in particular*, but I don’t care about one person’s armchair quarterback “feelings” about investment levels, bubbles, or <vague term that you won’t define>. Give me something I can learn from.

* I’m an equal opportunity critic of comments that are indistinguishable from people yelling into the void with whatever pops into their head. So yes, I’m extremely critical of this very human tendency that isn’t helpful.

replies(1): >>45113754 #
144. wagwang ◴[] No.45105879{4}[source]
No, that's what low interest rates do
replies(2): >>45105962 #>>45106231 #
145. hnav ◴[] No.45105884[source]
Cisco?
146. ◴[] No.45105899{5}[source]
147. lynx97 ◴[] No.45105903{4}[source]
Maybe. The question is, will legislation be fast enough? Maybe, if people keep going for politician porn: https://www.theguardian.com/world/2025/aug/28/outrage-in-ita...
replies(1): >>45106260 #
148. senko ◴[] No.45105915[source]
> We're basically at the point where you need a small country's GDP just to stay in the game for one more generation of models.

When you consider where most of that money ends up (Jensen &co), it's bizarre nobody can really challenge their monopoly - still.

149. xpe ◴[] No.45105933[source]
> Everyone knows how to build these models now, it just takes a ton of cash to do it.

This ignores differential quality, efficiency, partnerships, and lots more.

replies(1): >>45116883 #
150. duxup ◴[] No.45105936{4}[source]
>cash furnace

They don't even burn it on AI all the time either: https://openai.com/sam-and-jony/

replies(2): >>45106462 #>>45107232 #
151. StopDisinfo910 ◴[] No.45105941{5}[source]
No, all it takes is Anthropic 35x-ing their revenue while Alphabet revenue somehow stays the same despite Alphabet already having a product perfectly competitive with Anthropic and which can use the same revenue growth strategy.

As I said, insane. And that’s not even considering the 10 to 15% shares of Anthropic actually owned by Alphabet.

152. askafriend ◴[] No.45105951{5}[source]
Great thought, that seems very likely since so many "founder stories" are heavily spun tales.
153. 1oooqooq ◴[] No.45105952[source]
you say it can't go much longer, yet Herbalife is still listed.
154. vrt_ ◴[] No.45105953{3}[source]
Imagine solving energy as a side effect of this compute race. There's finally a reason for big money to be invested into energy infrastructure and innovation to solve a problem that can't be solved with traditional approaches.
replies(1): >>45106304 #
155. xbmcuser ◴[] No.45105960[source]
This is why I keep harping on the world needing China to get competitive on node size and crashing the market. They are already making energy with solar and renewable practically free. So the world needs AI to get out of the hand of the rich few and into the hands of everyone
156. JimmyBuckets ◴[] No.45105962{5}[source]
This comment seems like a rebuttal which is confusing to me because they are deeply related.
replies(1): >>45106764 #
157. derefr ◴[] No.45105963[source]
> privatization

You think any of these clusters large enough to be interesting, aren't authorized under a contractual obligation to run any/all submitted state military/intelligence workloads alongside their commercial workloads? And perhaps even to prioritize those state-submitted workloads, when tagged with flash priority, to the point of evicting their own workloads?

(This is, after all, the main reason that the US "Framework for Artificial Intelligence Diffusion" was created: America believed China would steal time on any private Chinese GPU cluster for Chinese military/intelligence purposes. Why would they believe that? Probably because it's what the US thought any reasonable actor would do, because it's what they were doing.)

These clusters might make private profits for private shareholders... but so do defense subcontractors.

158. rsanek ◴[] No.45105968[source]
I was convinced of this line of thinking for a while too but lately I'm not so sure. In software in particular, I think it's actually quite relevant what you can do in-house with a SOTA model (especially in the tool calling / fine tuning phase) that you just don't get with the same model via API. Think Cursor vs. Claude Code -- you can use the same model in Cursor, but the experience with CC is far and away better.

I think of it a bit like the Windows vs. macOS comparison. Obviously there will be many players that will build their own scaffolding around open or API-based models. But there is still a significant benefit to a single company being able to build both the model itself as well as the scaffolding and offering it as a unit.

replies(2): >>45107311 #>>45112697 #
159. jononor ◴[] No.45105970{3}[source]
Hope it stays long enough to build up serious electricity generation, storage and distribution. Cause that has a lot of productive uses, and has historically been underdeveloped (in favor of fossil fuels). Though there will likely be a squeeze before we get there...
replies(1): >>45109785 #
160. xpe ◴[] No.45105972[source]
Does someone care about this alternative speculative history? Why? If there was something called the sunk death fallacy, I would invoke it.
161. pembrook ◴[] No.45105974[source]
Before you pat yourself on the back for being so smart and grounded...

Remember, every technology you use today followed this pattern, with winners emerging that absolutely did go on to be extremely profitable for decades.

Most of us remember the .com era. But in the early 1900s there were literally hundreds of automotive startups (actual car companies, and tens of thousands of supplier startups) in the metro-Detroit area: https://en.wikipedia.org/wiki/List_of_defunct_automobile_man...

Some of these went on to be absolutely fantastic investments, most didn't. All VCs and people who invest in venture know this pattern.

Everybody involved knows exactly the high risk level of the bets they are making. This is not "dumb" money detached from reality, and the pension funds with a 3% allocation to venture are going to be just fine if all these companies implode, this is just uncorrelated diversification for them. The point of these VC funds is to lose most of the time and win big very rarely.

There will be crashes, and more bubbles in the future. Humans will human. Everything is fine.

replies(4): >>45106010 #>>45106026 #>>45109729 #>>45113732 #
162. madduci ◴[] No.45105985[source]
And just now came the email with the changes to their terms of usage and policy.

Nice timing? I am sure they have scored a deal with the selling of personal data

163. anthem2025 ◴[] No.45105993{3}[source]
I doubt it.

Some will be used a lot will be written off and tossed away.

164. fidotron ◴[] No.45105996[source]
That's now between an entire Instagram and WhatsApp acquisition cost.

It's hard to escape the conclusion that this is dumb money jumping on a bandwagon. To justify the expected returns here requires someone to make a transformer-like leap again, and that doesn't take spending huge amounts in one place, but funding a lot more speculative thinkers.

replies(1): >>45106134 #
165. mikestorrent ◴[] No.45105998{4}[source]
Inference performance per watt is continuing to improve, so even if we hit the peak of what LLM technology can scale to, we'll see tokens per second, per dollar, and per watt continue to improve for a long time yet.

I don't think we're hitting peak of what LLMs can do, at all, yet. Raw performance for one-shot responses, maybe; but there's a ton of room to improve "frameworks of thought", which are what agents and other LLM based workflows are best conceptualized as.

The real question in my mind is whether we will continue to see really good open-source model releases for people to run on their own hardware, or if the companies will become increasingly proprietary as their revenue becomes more clearly tied up in selling inference as a service vs. raising massive amounts of money to pursue AGI.

replies(1): >>45107941 #
166. yunwal ◴[] No.45105999{3}[source]
> Things working out in the end doesn't make what he did not a crime at the time

Morally speaking, no. Practically speaking, it does. He would not have seen jail time.

replies(2): >>45106760 #>>45108937 #
167. 113 ◴[] No.45106010{3}[source]
Did you respond to the wrong comment?
168. Analemma_ ◴[] No.45106018{5}[source]
Las Vegas still had deep mafia ties in the 1970s so that’s very possible.
169. arduanika ◴[] No.45106019{3}[source]
CZ was a minor factor. Someone internal leaked the balance sheet.

Hindsight says, don't do fraud.

replies(2): >>45106497 #>>45108957 #
170. 999900000999 ◴[] No.45106023[source]
Is it worth it, if I have an AI-related idea, to try and get it built?

It'll take a solid year and about 30k.

Any chance of even talking to a VC as an outsider?

replies(2): >>45106136 #>>45106205 #
171. ◴[] No.45106024{5}[source]
172. fullshark ◴[] No.45106026{3}[source]
And they also realize they don't need to be fantastic investments to pay off, they just need to IPO/be acquired at a higher share price.
replies(1): >>45106253 #
173. 1oooqooq ◴[] No.45106028[source]
almost nobody remembers the router craze.

people don't even remember the era before the current brands, like the time a Bell offshoot almost crashed Canada because they siphoned all the telephone money into bad routers.

replies(1): >>45111304 #
174. duxup ◴[] No.45106033{4}[source]
I suspect it needs to if it is going to cover the costs of training.
175. reissbaker ◴[] No.45106035{4}[source]
According to Dario, each model line has generally been profitable: i.e. $200MM to train a model that makes $1B in profit over its lifetime. But, since each model has been more and more expensive to train, they keep needing to raise more money to train the next generation of model, and the company balance sheet looks negative: i.e. they spent more this year than last (since the training cost for model N+1 is higher), and the model this year made less money than they spent (even if the model generation itself was profitable, model N isn't profitable enough to train model N+1 without raising - and spending - more money).

That's still a pretty good deal for an investor: if I give you $15B, you will probably make a lot more than $15B with it. But it does raise questions about when it will simply become infeasible to train the subsequent model generation due to the costs going up so much (even if, in all likelihood, that model would eventually turn a profit).
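A minimal sketch of that dynamic with made-up numbers in the spirit of the parent's (each model eventually returns 5x its training cost, but the next training run costs 5x more and is paid for before those returns fully arrive):

    # Illustrative assumptions: each model returns 5x its training cost over a
    # two-year lifetime, while the next model costs 5x more and is paid up front.
    cost_growth, lifetime_multiple = 5, 5
    train_costs = [0.2e9 * cost_growth**g for g in range(5)]  # gen g trained in year g

    for year in range(1, 5):
        # Revenue: half the lifetime return of each of the last two models.
        revenue = sum(train_costs[g] * lifetime_multiple / 2
                      for g in (year - 1, year - 2) if g >= 0)
        spend = train_costs[year]  # training the next generation
        print(f"year {year}: revenue ${revenue / 1e9:.1f}B, "
              f"training spend ${spend / 1e9:.1f}B, net ${(revenue - spend) / 1e9:+.1f}B")

Every generation pays for itself eventually, yet the annual cashflow stays negative and grows, which is the "keep raising or stop scaling" question the parent raises.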

replies(8): >>45106645 #>>45106689 #>>45106988 #>>45107665 #>>45108456 #>>45110567 #>>45112144 #>>45112270 #
176. paganel ◴[] No.45106040{3}[source]
There’s the opportunity cost here of those resources (and not talking only about the money) not being spent on power generating that actually benefits the individual consumer.
177. xpe ◴[] No.45106041[source]
Compare with “I predict people are going to die.”

Clear, testable predictions are possible if you try.

replies(1): >>45113981 #
178. IshKebab ◴[] No.45106047[source]
They're a huge VC. Paid my wages for a few years.
179. ijidak ◴[] No.45106051{3}[source]
Can you elaborate? The technology to build a human brain would cost billions in today's dollars. Are you thinking more so about energy efficiency?
replies(2): >>45107341 #>>45110494 #
180. maqp ◴[] No.45106070[source]
>You can have all the talent in the world but if you can't get 100k H100s and a dedicated power plant, you're out.

I really have to wonder, how long will it be before the competition moves into who has the most wafer-scale engines. I mean, surely the GPU is a more inefficient packaging form factor than large dies with on-board HBM, with a massive single block cooler?

replies(1): >>45106202 #
181. anthem2025 ◴[] No.45106072{3}[source]
Are you really trying to argue Tesla is fairly valued? In 2025?

When their sales have nosedived, new products have flopped, their CEO is the most disliked man in America, and their self driving still requires someone in the car at all times?

Tesla is a GameStop level meme stock.

182. arduanika ◴[] No.45106075{4}[source]
Correct. Companies go bust all the time, for market timing reasons that are mostly out of their control. But going bust is different from going bust and stealing billions.

Whether by negligence or intent, FTX was arranged so that they couldn't go bust without stealing.

183. 1oooqooq ◴[] No.45106080[source]
exactly. what are people who make these investments even betting on? it certainly is not revenue or dividends. so it can only be a bet the stock will go up faster than other less risky stocks.

and we continue to pretend that market generates any semblance of value.

replies(2): >>45109730 #>>45110403 #
184. wmf ◴[] No.45106081[source]
They have a bunch of Nemotron models. They can make more money from five(?) competing frontier labs than from trying to monopolize the frontier themselves.
replies(1): >>45113352 #
185. risyachka ◴[] No.45106096[source]
>> The compute moat is getting absolutely insane.

how so? deepseek and others do models on par with previous generation for a tiny fraction of a cost. Where is the moat?

186. ◴[] No.45106106[source]
187. nathan_douglas ◴[] No.45106114{3}[source]
It'd be an interesting time for China to invade Taiwan.
188. StopDisinfo910 ◴[] No.45106118{5}[source]
Anthropic competes solely in one of Alphabet multiple markets and that’s a market where Google already has a compelling competitive offer. This valuation gap doesn’t make any sense to me.
189. ACCount37 ◴[] No.45106122{5}[source]
By all accounts, what's in Google's racks right now (TPU v5e, v6e) is vaguely H100-adjacent, in both raw performance and supported model size.

If Google wants anything better than that? They, too, have to wait for the new hardware to arrive. Chips have a lead time - they may be your own designs, but you can't just wish them into existence.

replies(1): >>45106457 #
190. xpe ◴[] No.45106134[source]
I don’t like to think of predicting the future as “a conclusion” of some assumptions. I don’t think it puts you in a frame of mind such that you’re genuinely curious.
replies(1): >>45106248 #
191. makestuff ◴[] No.45106136[source]
Might as well try, the worst that can happen is they say no or ignore your email.
192. ◴[] No.45106138{4}[source]
193. arduanika ◴[] No.45106147{4}[source]
It is probably just a coincidence, but it's darkly funny how well this lines up with the strategy described in a rather infamous LessWrong post. The title is "Solutions to the Altruist's burden: the Quantum Billionaire Trick", but you probably know it by a different name. The author is one Roko Mijic.
replies(1): >>45115538 #
194. AlienRobot ◴[] No.45106150[source]
I saw a story posted on reddit that U.S. engineers went to China and said the U.S. would lose the A.I. game because THE ENERGY GRID was much worse than China's.

That's just pure insanity to me.

It's not even Internet speed or hardware. It's literally not having enough electricity. What is going on with the world...

replies(1): >>45108226 #
195. mrtesthah ◴[] No.45106151{4}[source]
If you know of other SBFs, please name them so that we can call for their investigation and prosecution.
replies(2): >>45107661 #>>45112822 #
196. nathan_douglas ◴[] No.45106153{5}[source]
A rounding error, really.
197. tinyhouse ◴[] No.45106157{4}[source]
You should check a few publicly traded software companies. Figma, for example, is at a P/S multiple of ~38 currently; Google is at 6.8. If Anthropic had done an IPO today, it would probably be at ~100 given where Figma is.
198. jononor ◴[] No.45106160[source]
They are printing money right now, and their customers are taking all the risk. Just keep delivering and enjoy success.
199. bradley13 ◴[] No.45106165[source]
Throwing money and compute at AI strikes me as a very short-term solution. In the end, the human brain does not run off a nuclear power plant, not even when we are learning.

I expect the next breakthroughs to be all about efficiency. Granted, that could be tomorrow, or in 5 years, and the AI companies have to stay all in in the meantime.

replies(4): >>45106360 #>>45107202 #>>45110669 #>>45119488 #
200. tinyhouse ◴[] No.45106172{4}[source]
OK I will remind you how stupid your comment was when they reach $10B in revenue next year.
201. ijidak ◴[] No.45106173[source]
I think we underestimate the insane amount of idle cash the rich have. We know that the top 1% owns something like 80% of all resources, so they don't need that money.

They can afford to burn a good chunk of global wealth so that they can have even more global wealth.

Even at the current rates of insanity, the wealthy have spent a tiny fraction of their wealth on AI.

Bezos could put up this $13 billion himself and still be among the five richest people in the world.

(Remember Elon cost himself $40 billion because of a tweet and still was fine!)

This is a technology that could replace a sizable fraction of humankind as a labor input.

I'm sure the rich can dig much deeper than this.

replies(1): >>45107268 #
202. arduanika ◴[] No.45106181{3}[source]
That's a pretty distorted view, and it's probably what people tell themselves right when they're about to commit fraud.
203. mfro ◴[] No.45106202{3}[source]
The sentiment I have heard is that manufacturers do not want to increase die size because defects per die increase along with it.
replies(2): >>45106540 #>>45112048 #
204. hsaliak ◴[] No.45106203[source]
Maybe these labs should consider funding specific models, and funneling returns back to investors from profits made with those models. Like the film industry.
205. red2awn ◴[] No.45106205[source]
If it only takes 30k, can't you bootstrap and build it yourself? Or even just work on it on the side alongside your day job.
replies(1): >>45111154 #
206. derefr ◴[] No.45106211{3}[source]
> Anecdotally moving from model to model I'm not seeing huge changes in many use cases.

Probably because you're doing things that are hitting mostly the "well-established" behaviors of these models — the ones that have been stable for at least a full model-generation now, that the AI bigcorps are currently happy keeping stable (since they achieved 100% on some previous benchmark for those behaviors, and changing them now would be a regression per those benchmarks.)

Meanwhile, the AI bigcorps are focusing on extending these models' capabilities at the edge/frontier, to get them to do things they can't currently do. (Mostly this is inside-baseball stuff to "make the model better as a tool for enhancing the model": ever-better domain-specific analysis capabilities, to "logic out" whether training data belongs in the training corpus for some fine-tune; and domain-specific synthesis capabilities, to procedurally generate unbounded amounts of useful fine-tuning corpus for specific tasks, ala AlphaZero playing unbounded amounts of Go games against itself to learn on.)

This means that the models are getting constantly bigger. And this is unsustainable. So, obviously, the goal here is to go through this as a transitionary bootstrap phase, to reach some goal that allows the size of the models to be reduced.

IMHO these models will mostly stay stable-looking for their established consumer-facing use-cases, while slowly expanding TAM "in the background" into new domain-specific use-cases (e.g. constructing novel math proofs in iterative cooperation with a prover) — until eventually, the sum of those added domain-specific capabilities will turn out to have all along doubled as a toolkit these companies were slowly building to "use models to analyze models" — allowing the AI bigcorps to apply models to the task of optimizing models down to something that run with positive-margin OpEx on whatever hardware that would be available at that time 5+ years down the line.

And then we'll see them turn to genuinely improving the model behavior for consumer use-cases again; because only at that point will they genuinely be making money by scaling consumer usage — rather than treating consumer usage purely as a marketing loss-leader paid for by the professional usage + ongoing capital investment that that consumer usage inspires.

replies(3): >>45106382 #>>45106411 #>>45111625 #
207. leptons ◴[] No.45106214{4}[source]
It's the Uber business plan - losing money until the competition loses more and goes out of business. So far Lyft seems to be doing okay, which proves the business plan doesn't really work.
replies(3): >>45106467 #>>45106855 #>>45107453 #
208. Printerisreal ◴[] No.45106231{5}[source]
No that's what PRINTING fiat money does. Low or high interest rates, they print $trillions
replies(3): >>45106493 #>>45106757 #>>45106825 #
209. Aeolun ◴[] No.45106246{3}[source]
As a fun comparison, because I saw the population is more or less the same.

The GDP of the Netherlands is about $1.2 trillion with a population of 18 million people.

I understand that that's not quite what's meant by 'small country', but in both population and size it doesn't necessarily seem accurate.

210. fidotron ◴[] No.45106248{3}[source]
> Remember the YouTube acquisition? To many, it seemed bonkers.

Because of the legal uncertainty about what they were doing. There was no fundamental technological impediment.

Here the technology simply doesn't exist, and this is a giant bet that it can be magically created by throwing (a lot) more money at the existing idea. This is why it's "dumb money": they don't seem to understand the dynamics of what they're investing in.

replies(2): >>45106564 #>>45110430 #
211. farfolomew ◴[] No.45106251[source]
It’s about time the western world finally changes from a five to four-day work week!

That's just about the most tangible benefit I see this AI breakthrough delivering. What an asset to have too, socially and civically, especially when compared to the West's primary adversary: the CCP and its communist message of 'equality' for the people when they're still working six days a week!

212. delfinom ◴[] No.45106253{4}[source]
RIP our 401ks that will end up being the bagholders when it's dumped on the market.
replies(1): >>45106420 #
213. FireBeyond ◴[] No.45106254{4}[source]
What this version of the FedEx story doesn't mention is that Fred was already stiffing his pilots on their salaries. Taking the last money in the company and deciding that its best use was the blackjack table in Vegas rather than paying his employees ... worked out, but it was a gamble, let's be clear, not a calculated decision - like you say, not the decision of a "great genius". Had it gone a different way, the story would be "FedEx founder decides to go gambling, leaving his employees without paychecks".
replies(1): >>45109697 #
214. AlienRobot ◴[] No.45106256{3}[source]
Step 1: burn billions of dollars.

Step 2: achieve AGI.

Step 3: ?

Step 4: transcend money.

215. arduanika ◴[] No.45106259{3}[source]
Yes. So they indirectly owned some Anthropic through the FTX bankruptcy. I kinda wonder whether they somehow opted to keep their Anthropic stake when the FTX estate sold. Or maybe they bought it at some other time.
replies(1): >>45106328 #
216. kaashif ◴[] No.45106260{5}[source]
Well considering it has been possible to produce similar doctored images for decades at this point, I think we can conclude legislation has not been fast enough.

That article has nothing to do with AI, really.

replies(1): >>45107631 #
217. sidewndr46 ◴[] No.45106272[source]
I'm not an expert at how private investment rounds work, but aren't most "raises" of AI companies just huge commitments of compute capacity? Either pre-existing or build-out.
replies(1): >>45107400 #
218. d_burfoot ◴[] No.45106284[source]
There's a big issue with a lot of thinking about these valuations, which is that LLM inference does not need the 5-nines of uptime guarantees that cloud datacenters provide. You are going to see small business investors around the world pursue the following model:

- Buy an old warehouse and a bunch of GPUs

- Hire your local tech dude to set up the machines and install some open-source LLMs

- Connect your machines to a routing service that matches customers who want LLM inference with providers

If the service goes down for a day, the owner just loses a day's worth of income, nobody else cares (it's not like customers are going to be screaming at you to find their data). This kind of passive, turn-key business is a dream for many investors. Comparable passive investments like car washes, real estate, laundromats, self-storage, etc are messier.
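Concretely, the "local tech dude" part is not much more than this. A minimal sketch, assuming a vLLM server on the warehouse box; the model name, hostname, and router are placeholders, not a real marketplace:

    # On the GPU box: expose an OpenAI-compatible endpoint (pip install vllm)
    #   vllm serve meta-llama/Llama-3.1-8B-Instruct --host 0.0.0.0 --port 8000
    # Anywhere else: point any OpenAI-style client at that box (pip install openai)
    from openai import OpenAI

    client = OpenAI(base_url="http://warehouse-gpu-01:8000/v1",
                    api_key="not-needed-locally")
    resp = client.chat.completions.create(
        model="meta-llama/Llama-3.1-8B-Instruct",
        messages=[{"role": "user", "content": "Summarize this invoice: ..."}],
    )
    print(resp.choices[0].message.content)

The routing service then just needs a list of such endpoints and a health check; if one warehouse goes dark, requests fail over to the next one.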

replies(2): >>45106866 #>>45112696 #
219. Razengan ◴[] No.45106285[source]
Barely 50 years ago computers used to cost a million dollars and were less powerful than your phone's SIM card.

> GPT-4 training was what, $100M? GPT-5/Opus-4 class probably $1B+?

Your brain? Basically free *(not counting time + food)

Disruption in this space will come from whoever can replicate analog neurons in a better way.

Maybe one day you'll be able to Matrix information directly into your brain and know kung-fu in an instant. Maybe we'll even have a Mentat social class.

replies(2): >>45107367 #>>45107617 #
220. xpe ◴[] No.45106287[source]
Remember the YouTube acquisition? Many probably don’t since it was 2006. $1.65B. To many, it seemed bonkers.

Narrow point: In general, one person’s impression of what is crazy does not fare well against market-generated information.

Broader point: If you think you know more than the market, all other things equal, you’re probably wrong.

Lesson: Only searching for reasons why you are right is a fishing expedition.

If the investment levels are irrational, to what degree are they? How and why? How will it play out specifically? Predicting these accurately is hard.

replies(5): >>45106974 #>>45107176 #>>45107369 #>>45109732 #>>45110507 #
221. cellis ◴[] No.45106292[source]
He'll get a pardon next election cycle.
222. sidewndr46 ◴[] No.45106294[source]
I'm pretty sure giving yourself a 1 billion dollar loan had something to do with his downfall. Not a failure to 'survive the downturn'.
223. jayd16 ◴[] No.45106298{4}[source]
If it were worth it, you'd see some easy self-hostable package, no? And by definition, it's profitable to self-host, or these AI companies are in trouble.
replies(3): >>45107146 #>>45107372 #>>45109892 #
224. bobsmooth ◴[] No.45106304{4}[source]
I would trade the destruction of trustworthy information and images on the internet for clean fusion power. It's a steep cost but I think it's worth it.
225. sidewndr46 ◴[] No.45106328{4}[source]
Was FTX actually liquidated? Last I read, the lawyers were just busy paying themselves $500,000 a day.
replies(1): >>45111143 #
226. edm0nd ◴[] No.45106332{4}[source]
It's some GPT wrapper app that has 100 downloads.

Also, if your founder has to use dozens of buzzwords when asked to describe what their app does and that still doesn't even explain it, it's obviously just BS.

"Arcarae’s mission is to help humanity remember and unlock the power each individual holds within themself so they can bring into reality their unique, authentic expression of self without fear or compromise.

Our research endeavors are designed to support this mission via computationally modeling higher-order cognition and subjective internal world models."

lol

replies(1): >>45106562 #
227. Printerisreal ◴[] No.45106341{5}[source]
Investors are forward-looking, except when it's Micron in 2000.

Anthropic has several similar competitors with actual, real distribution and tech. The ones that can go 10x are underdogs, like Google before its IPO, or Amazon, or Shopify, etc. Anthropic's current valuation is beyond that. Investors no longer leave any big opportunity to the public; they capture it via private funding.

228. 1970-01-01 ◴[] No.45106358[source]
Why are they bothering with billions of dollars when crypto coins already delivered, right on schedule, the new foundation of global currency? Why aren't these previous investors pouring all of their BTC into Anthropic as fast as possible? Isn't $183,000,000,000 a massive signal that this next leap for Silicon Valley will be as solid as their previous revolution?
229. ryukoposting ◴[] No.45106360[source]
This is roughly where I am on the matter. If the energy costs stay massive, your investment in AI is really just a bet that energy production will get cheaper. If the energy costs fall, so does the moat that keeps valuations like this one afloat.

If there's a step-function breakthrough in efficiency, it's far more likely to be on the model side than on the semiconductor side. Even then, investing in the model companies only makes sense if you think one of them is going to be able to keep that innovation within their walls. Otherwise, you run into the same moat-draining problem.

replies(1): >>45114488 #
230. darepublic ◴[] No.45106364{3}[source]
I hope you're right.
231. ◴[] No.45106365[source]
232. wslh ◴[] No.45106367{3}[source]
> Anecdotally moving from model to model I'm not seeing huge changes in many use cases. I can just pick an older model and often I can't tell the difference...

Model specialization. For example a model with legal knowledge based on [private] sources not used until now.

233. bpodgursky ◴[] No.45106368{3}[source]
Yes, but investors being made whole and profitable would almost certainly not have resulted in jail time. He probably would even have had enough unquestionably personal returns to pay back any misappropriated funds in a negotiated settlement, had they even come to light at all.
234. toomuchtodo ◴[] No.45106373{5}[source]
Also true.
235. kdmtctl ◴[] No.45106382{4}[source]
You have just described a singularity point for this line of business. Which could happen. Or not.
replies(1): >>45106674 #
236. oytis ◴[] No.45106388[source]
Series what?
replies(1): >>45112567 #
237. Workaccount2 ◴[] No.45106411{4}[source]
>Mostly this is inside-baseball stuff to "make the model better as a tool for enhancing the model"

Last week I put GPT-5 and Gemini 2.5 in a conversation with each other about a topic of GPT-5's choosing. What did it pick?

Improving LLMs.

The conversation was far over my head, but the two seemed to be readily able to get deep into the weeds on it.

I took it as a pretty strong signal that they have an extensive training set of transformer/LLM tech.
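Mechanically it's a tiny loop. A rough sketch, assuming you go through the APIs rather than the web UIs; the model names and turn count are whatever you pick, and it's simplified so each model only sees the other's latest reply, not the full history:

    # pip install openai google-genai; needs OPENAI_API_KEY and GEMINI_API_KEY set
    from openai import OpenAI
    from google import genai

    oai = OpenAI()
    gem = genai.Client()

    msg = "Pick a topic you find interesting and open a discussion about it."
    for turn in range(6):
        if turn % 2 == 0:  # GPT's turn
            r = oai.chat.completions.create(
                model="gpt-5",
                messages=[{"role": "user", "content": msg}])
            msg = r.choices[0].message.content
        else:              # Gemini's turn
            r = gem.models.generate_content(model="gemini-2.5-pro", contents=msg)
            msg = r.text
        print(f"--- turn {turn} ---\n{msg}\n")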

replies(1): >>45107117 #
238. pembrook ◴[] No.45106420{5}[source]
Your 401k is going to be fine; nobody goes public anymore until they're a big, dumb, boring, profitable company and all of the risk+returns have been wrung out.

Too many normies bet their life savings without understanding this risk in prior bubbles, so we regulated away the ability for non-institutional investors to take venture risk at all.

replies(1): >>45106818 #
239. eatsyourtacos ◴[] No.45106425{3}[source]
Economics is entirely made up. It's a social science.
replies(1): >>45106508 #
240. xxpor ◴[] No.45106457{6}[source]
Aren't chips + memory constrained by process + reticle size? And therefore, how much HBM you can stuff around the compute chip? I'd expect everyone to more or less support the same model size at the same time because of this, without a very fundamentally different architecture.
241. dmbche ◴[] No.45106462{5}[source]
"May 21, 2025

This is an extraordinary moment.

Computers are now seeing, thinking and understanding.

Despite this unprecedented capability, our experience remains shaped by traditional products and interfaces."

I don't even want to learn about them; every line is so exhausting

replies(1): >>45106604 #
242. dvfjsdhgfv ◴[] No.45106463{3}[source]
> I can just pick an older model and often I can't tell the difference...

Or, as in the case of a leading North American LLM provider, I would love to be able to choose an older model but it chooses it for me instead.

243. Workaccount2 ◴[] No.45106467{5}[source]
There are endless examples of that business model working...
replies(1): >>45112318 #
244. arcticbull ◴[] No.45106493{6}[source]
Who's "they"?
replies(1): >>45106748 #
245. Aeolun ◴[] No.45106497{4}[source]
Or hide it better?
246. Workaccount2 ◴[] No.45106503{5}[source]
Which is mostly natural gas sadly.
replies(1): >>45110230 #
247. ACCount37 ◴[] No.45106508{4}[source]
In the case of economics, the gap between "social science" and "entirely made up" is ten miles long and filled with hellfire.

The laws of economics have the kind of inevitability you expect from the laws of physics. Disrespect them at your own peril.

replies(1): >>45109224 #
248. csomar ◴[] No.45106511{4}[source]
Here is another way to look at it: Anthropic is a put option on Google, worth 10% of Google's price. It expires when they run out of funds.
249. Zigurd ◴[] No.45106515[source]
I am old enough to have had the pleasure of Atiq Raza telling me, within the first four minutes of the meeting, that the thing I was helping pitch couldn't be sold to Avaya (or was it Cisco?) in four months for $1 billion and so was not interesting. Evidently he was seeing enough pitches for things he could sell at that price and in that time.

Now he's in AI investments.

replies(1): >>45117717 #
250. Workaccount2 ◴[] No.45106540{4}[source]
Meanwhile at Cerebras... heh

But I do believe that their cost per unit of compute is still far higher than that of discrete chips.

251. koakuma-chan ◴[] No.45106562{5}[source]
> lol

What do you mean lol? Isn't that awesome? Feel free to share if you think that isn't awesome. I personally don't think there is enough information here to tell if that is awesome or satire, but it is interesting how usually things like this are considered awesome, but this particular one is deemed satire.

replies(1): >>45107397 #
252. xpe ◴[] No.45106564{4}[source]
Update: I edited my comment to focus on the mindset of making predictions (including recognizing the uncertainty and being comprehensive about possible scenarios)

I made a new top-level comment mentioning the 2006 YouTube acquisition only to show that many people were shocked, but -surprise- markets are usually better predictors than individual hunches.

replies(1): >>45106822 #
253. yieldcrv ◴[] No.45106602{5}[source]
That's the thing: what's "illegal" will challenge our whole society when it comes to dynamically generated, real, interactive avatars that are new humans.

When it comes to sexually explicit content with adults in general, all of our laws rely on a human actor existing.

FOSTA and SESTA, for example, are related to user-generated content of humans. They rely on making sure an actual human isn't being exploited, and on burdening everyone with that enforcement. When everyone can just say "that's AI", nobody's going to care, and platforms will be willing to take the risk of it being true again - or a new hit platform will. That kind of content doesn't exist in large quantities yet, and won't until an ungimped video model can generate it.

Concerns about trafficking only apply to actual humans, not entirely new avatars.

Regarding children, there are more restrictions that may already cover this; there is a large market for merely adult-looking characters though, and worries about underage content can be tackled independently, or be found entirely futile. Not my problem; focus on what you can control. This is what's coming, though.

People already don't mind parasocial relationships with generative AI and already pay for that; just add nudity.

254. duxup ◴[] No.45106604{6}[source]
Agreed, that whole page is brutal to read.
255. sdesol ◴[] No.45106637{4}[source]
> So it is a game of being the one that is left standing

Or the last investor. When this type of money is raised, you can be sure the earlier investors are looking for ways to have a soft landing.

replies(1): >>45116529 #
256. robwwilliams ◴[] No.45106639{4}[source]
The jump to a 1-million-token context for Sonnet 4, plus access to the internet, has been a game-changer for me. And somebody should remind Anthropic leadership to at least mirror Wikipedia; better yet, support Wikipedia actively.

All of the big AI players have profited from Wikipedia, but have they given anything back, or are they just parasites on FOSS and free data?

257. viscanti ◴[] No.45106645{5}[source]
Well how much of it is correlation vs causation. Does the next generation of model unlock another 10x usage? Or was Claude 3 "good enough" that it got traction from early adopters and Claude 4 is "good enough" that it's getting a lot of mid/late adopters using it for this generation? Presumably competitors get better and at cheaper prices (Anthropic charges a premium per token currently) as well.
258. Workaccount2 ◴[] No.45106665{5}[source]
At heart, not really. The whole point of all of this is to motivate humans to get off their butt and reduce entropy.
replies(2): >>45107224 #>>45107926 #
259. derefr ◴[] No.45106674{5}[source]
I wouldn't describe it as a singularity point. I don't mean that they'll get models to design better model architectures, or come up with feature improvements for the inference/training host frameworks, etc.

Instead, I mean that these later-generation models will be able to be fine-tuned to do things like e.g. recognizing and discretizing "feature circuits" out of the larger model NN into algorithms, such that humans can then simplify these algorithms (representing the fuzzy / incomplete understanding a model learned of a regular digital-logic algorithm) into regular code; expose this code as primitives/intrinsics the inference kernel has access to (e.g. by having output vectors where every odd position represents a primitive operation to be applied before the next attention pass, and every even position represents a parameter for the preceding operation to take); cut out the original circuits recognized by the discretization model, substituting simple layer passthrough with calls to these operations; continue training from there, to collect new, higher-level circuits that use these operations; extract + burn in + reference those; and so on; and then, after some amount of this, go back and re-train the model from the beginning with all these gained operations already being available from the start, "for effect."

Note that human ingenuity is still required at several places in this loop; you can't make a model do this kind of recursive accelerator derivation to itself without any cross-checking, and still expect to get a good result out the other end. (You could, if you could take the accumulated intuition and experience of an ISA designer that guides them to pick the set of CISC instructions to actually increase FLOPS-per-watt rather than just "pushing food around on the plate" — but long explanations or arguments about ISA design, aren't the type of thing that makes it onto the public Internet; and even if they did, there just aren't enough ISAs that have ever been designed for a brute-force learner like an LLM to actually learn any lessons from such discussions. You'd need a type of agent that can make good inferences from far less training data — which is, for now, a human.)

260. huevosabio ◴[] No.45106679[source]
Instead of enriching uranium we're enriching weights!
261. dom96 ◴[] No.45106689{5}[source]
> if I give you $15B, you will probably make a lot more than $15B with it

"probably" is the key word here, this feels like a ponzi scheme to me. What happens when the next model isn't a big enough jump over the last one to repay the investment?

It seems like this already happened with GPT-5. They've hit a wall, so how can they be confident enough to invest ever more money into this?

replies(1): >>45107077 #
262. ◴[] No.45106700{4}[source]
263. dkobia ◴[] No.45106710[source]
AI investment is headed toward 2% of US GDP, getting close to the Apollo program and 10 times the Manhattan Project. Almost 15% of the US stock market is tied up in these investments, so most of us have skin in this game whether we like it or not, for better or worse.
264. yabones ◴[] No.45106719{3}[source]
You can take decades-old fibre, stick some new transceivers on the ends, and have it run at the very latest speeds (unless it's cheap, damaged, etc.) without having to pull it out and reinstall it.

H100s will not age this well. It's not like owning old railroad tracks; it's like owning a fleet of 1992 Ford Tauruses. They'll quickly become obsolete and uneconomical in just a few years as semiconductor manufacturing continues to improve.

265. solomonb ◴[] No.45106723{3}[source]
They are only getting deprecated this fast because the cost of training is in some sense sustainable. Once it is not, then they will no longer be deprecated so fast.
replies(1): >>45113888 #
266. Printerisreal ◴[] No.45106748{7}[source]
Governments, CBs and investment banks. "They" do it and work together to print more.
replies(1): >>45106858 #
267. wagwang ◴[] No.45106757{6}[source]
Every dollar that's printed gets multiplied based on the interest rate
268. ramesh31 ◴[] No.45106760{4}[source]
>Morally speaking, no. Practically speaking, it does. He would not have seen jail time.

It's literally exactly what Shkreli got 7 years for, even after repaying investors. If you defraud money from someone and put it back before they find out, it's still a crime. Fraud is about intent more than anything else, and they proved it for SBF.

replies(1): >>45106973 #
269. wagwang ◴[] No.45106764{6}[source]
Maybe, but interest rates, among other bad banking practices, are how we got here in the first place.
270. otterley ◴[] No.45106818{6}[source]
> we regulated away the ability for non-institutional investors to take venture risk at all.

Some institutions try to achieve this by launching their own cryptocurrencies, but by and large, the market isn't biting.

271. fidotron ◴[] No.45106822{5}[source]
This isn't a market in that sense, though - it's very much one-sided what Anthropic tells us, and they are privately traded.

It is very far from a situation where the price discovery mechanism is allowed to work.

replies(1): >>45107153 #
272. ◴[] No.45106825{6}[source]
273. puchatek ◴[] No.45106851[source]
And how much will one query cost you once the companies start to try and make this stuff profitable?
274. jononor ◴[] No.45106855{5}[source]
Uber's market cap places it in the top 100 in the world, whereas Lyft is around 1/25th of Uber's market cap, and not even in the top 1000. I would consider that a success... That is basically as winner-takes-all as one can realistically get in a global market. Cases where the top is just 5x the runner-up would still be very winner-oriented.
replies(1): >>45107676 #
275. arcticbull ◴[] No.45106858{8}[source]
In a centrally banked economy, retail and commercial banks create money when you take out loans. The government doesn't create money except during QE which only happened twice in the US, 2009-2014 and 2020-2021. That's why I was curious what you meant by "they." The Fed has been actively destroying money for the last 4 years.
replies(3): >>45106985 #>>45107050 #>>45107919 #
276. FinnLobsien ◴[] No.45106864{4}[source]
But that’s precisely what I mean. How many companies had similarly sketchy situations, cleaned up their act and nobody ever noticed?

That number isn’t 0

277. matt3D ◴[] No.45106866[source]
I use OpenAI's batch mode for about 80% of my AI work at the moment, and one of the upsides is it reduces the frantic side of my AI work. When the response is immediate I feel like I can't catch a break.

I think once the sheen of Microsoft Copilot and the like wears off and people realise LLMs are really good at creating deterministic tools but not very good at being one, not only will the volume of LLM usage decline, but the urgency will too.
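For anyone who hasn't tried it, the batch workflow is roughly the following. A hedged sketch of how I understand OpenAI's Batch API; the model name and prompts are placeholders:

    # pip install openai; write requests to a JSONL file, upload, submit, poll later
    import json
    from openai import OpenAI

    client = OpenAI()

    # 1. One JSON object per request, each with a custom_id to match results later.
    with open("requests.jsonl", "w") as f:
        for i, prompt in enumerate(["Classify ticket #1 ...", "Classify ticket #2 ..."]):
            f.write(json.dumps({
                "custom_id": f"req-{i}",
                "method": "POST",
                "url": "/v1/chat/completions",
                "body": {"model": "gpt-4o-mini",
                         "messages": [{"role": "user", "content": prompt}]},
            }) + "\n")

    # 2. Upload and submit; results come back asynchronously (within 24h).
    batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")
    batch = client.batches.create(input_file_id=batch_file.id,
                                  endpoint="/v1/chat/completions",
                                  completion_window="24h")
    print(batch.id, batch.status)  # poll later with client.batches.retrieve(batch.id)

Nothing about it is urgent by design, which is exactly the point.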

replies(1): >>45109755 #
278. ericmcer ◴[] No.45106897[source]
Could they vastly reduce this cost by specializing models? Like is a general know everything model exponentially more expensive than one that deeply understands a single topic (like programming, construction, astrophysics, whatever)?

Is there room for a smaller team to beat Anthropic/OpenAI/etc. at a single subject matter?

279. xyst ◴[] No.45106931[source]
The only people this matters to are the initial investors from the earlier series or seed stages.
replies(1): >>45112560 #
280. tpurves ◴[] No.45106936[source]
And 75% of that just gets shipped right over to nVidia as pure profit. The mind boggles at the macro-economic inefficiency of that situation.
281. rich_sasha ◴[] No.45106940[source]
It's the SV playbook: invent a field, make it indispensable, monopolise it and profit.

It still amazes me that Uber, a taxi company, is worth however many billions.

I guess for the bet to work out, it kinda needs to end in AGI for the costs to be worth it. LLMs are amazing but I'm not sure they justify the astronomical training capex, other than as a stepping stone.

replies(2): >>45107357 #>>45107481 #
282. yunwal ◴[] No.45106973{5}[source]
Right, but that’s because Shkreli openly admitted to it on the internet
283. xyst ◴[] No.45106974[source]
Somebody didn’t get the memo from MIT…
284. Printerisreal ◴[] No.45106985{9}[source]
Now explain why the government raises the debt limit, other than to allow printing fiat money?
replies(1): >>45107072 #
285. mandevil ◴[] No.45106988{5}[source]
I mean, this is how semiconductors have worked forever. Every new generation of fab costs ~2x what the previous generation did, and you need to build a new fab every couple of years. But (if you could keep the order book full for the fab) it would make a lot of money over its lifetime, and you still needed to borrow/raise even more to build the next generation of fab. And if you were wrong about demand .... you got into a really big bust, which is also characteristic of the semiconductor industry.

This was the power of Moore's Law, it gave the semiconductor engineers an argument they could use to convince the money-guys to let them raise the capital to build the next fab- see, it's right here in this chart, it says that if we don't do it our competitors will, because this chart shows that it is inevitable. Moore's Law had more of a financial impact than a technological one.

And now we're down to a point where only TSMC is for sure going through with the next fab (as a rough estimate of cost, think 40 billion dollars)- Samsung and Intel are both hemming and hawing and trying to get others to go in with them, because that is an awful lot of money to get the next frontier node. Is Apple (and Nvidia, AMZ, Google, etc.) willing to pay the costs (in delivery delays, higher costs, etc.) to continue to have a second potential supplier around or just bite the bullet and commit to TSMC being the only company that can build a frontier node?

And even if they can make it to the next node (1.4nm/14A), can they get to the one after that?

The implication for AI models is that they can end up like Intel (or AMD, selling off their fab) if they misstep badly enough on one or two nodes in a row. This was the real threat of Deepseek: if they could get frontier models for an order of magnitude cheaper, then the entire economics of this doesn't work. If they can't keep up, then the economics of it might, so long as people are willing to pay more for the value produced by the new models.

replies(1): >>45108069 #
286. ericmcer ◴[] No.45106999[source]
Maybe this is a roundabout, weird benefit of income inequality... Like the banks and private equity have so much cash to burn that they start taking increasingly risky moonshots that result in actual innovative projects. Normally projects like this would require the government to spearhead them, but now there is so much cash floating around that they can just throw $13B at a totally unprofitable, high-risk company.
287. wagwang ◴[] No.45107050{9}[source]
The amount of money banks create is determined by the appetite for credit, which is determined by the interest rate. The Fed has not been actively destroying money; they are at most slowing the rate of increase of the money supply.
replies(1): >>45107111 #
288. arcticbull ◴[] No.45107072{10}[source]
Ah yeah, that's a common misconception.

Deficit spending doesn't create new money. Deficit spending borrows existing money from the population and institutions in exchange for a promise of future government revenues. The Fed does not participate in treasury primary auctions and does not monetize the debt as a means of funding government operations.

If you printed new money to pay for the government, you wouldn't have a debt. That's double-counting. Not to mention the debt is twice as large as the entire money supply so what you're suggesting isn't even physically possible. It would be inflationary to simply print new money to finance spending, which is exactly why it's not done.

[edit] Also the debt limit is a stupid concept that's likely unconstitutional. Congress authorizes spending, meaningful debate over paying for it by adjusting the debt limit likely falls afoul of the 14th amendment's public debt clause. But yeah I mean the debt limit goes up because the government spends more money than it takes in, so it needs to borrow more each year.

289. bcrosby95 ◴[] No.45107077{6}[source]
I think you're really bending over backwards to make this company seem non-viable.

If model training has truly turned out to be profitable at the end of each cycle, then this company is going to make money hand over fist, and investing money to outcompete the competition is the right thing to do.

Most mega corps started out wildly unprofitable due to investing in the core business... until they weren't. It's almost as if people forget the days of Facebook being seen as continually unprofitable. This is how basically all the huge tech companies you know today started.

replies(3): >>45107188 #>>45109776 #>>45113886 #
290. Aurornis ◴[] No.45107079{3}[source]
Country scale is weird because it has such a large range.

California (where Anthropic is headquartered) has over twice as many people as all of Somalia.

The state of California has a GDP of $4.1 Trillion. $13 billion is a rounding error at that scale.

Even the San Francisco Bay Area alone has around half as many people as Somalia.

291. matthewdgreen ◴[] No.45107085[source]
What’s the hardware capability doubling rate for GPUs in clusters? Or (since I know that’s complicated to answer for dozens of reasons): on average how many months has it been taking for the hardware cost of training the previous generation of models to halve, excluding algorithmic improvements?
292. teepo ◴[] No.45107108[source]
Really good analogy: Bay Networks, Lucent, Nortel, and Cisco got beat up or destroyed on the equipment side. And then the long haul fiber companies never got ROI (but paved the way for broadband).
293. arcticbull ◴[] No.45107111{10}[source]
They influence creation of money by adjusting the short-term interest rate which influences the demand for borrowing at commercial and retail banks. It's not that direct or straight-forward though, because they only have control over the short end of the yield curve not the long end. The long end of the yield curve has interest rates defined mostly by inflation expectations. If they dropped rates to 0% overnight it probably wouldn't move the 30Y yield all that much -- it might even raise it because of the expectation lower short-end yields would raise inflation.

The Fed doesn't have nearly as much control as folks think.

The Fed directly created money during QE and they are directly destroying it during QT. There's a net add, but that's mostly because the economy is growing, which creates new demand for money as expressed by demand for debt.

The money supply staying fixed or shrinking is a non-goal anyways. It's irrelevant. What matters is inflation as measured from the change in actual prices.

replies(1): >>45113966 #
294. temp0826 ◴[] No.45107117{5}[source]
Like trying to have a lunch conversation with coworkers about anything other than work
295. worldsayshi ◴[] No.45107136{4}[source]
While decoding your comment I'm going to assume Sasquatch to be a semi-underground (no web site, only calls) un-startup that specializes in survival kits for people leaving civilization behind. Like calling the vacuum repair store but more hippie themed.
replies(1): >>45107181 #
296. didip ◴[] No.45107143[source]
Everyone is so pessimistic about the bubble bursting and money simply catching fire in this AI race…

However, I remember when YouTube was young. It was burning money every month on bandwidth.

After selling out to Google, it took another decade to turn a profit. But it did. And it achieved its end game. As the winner, it took all of the video hosting market. And Google reaped the entirety of that win.

This AI race is playing out the same way. The winner has the ability to disrupt several FAANGs and FAANG neighbors (e.g. Adobe). And that's a 1-2 trillion dollar market, combined.

replies(3): >>45107288 #>>45110286 #>>45112511 #
297. quotemstr ◴[] No.45107146{5}[source]
Does your "self hostable package" come with its own electric substation?
replies(1): >>45108582 #
298. SirMaster ◴[] No.45107149{4}[source]
Is there no moat for previous account and user buy-in?

Convincing billions of users to make a new account and do all their e-mail on a new domain? A new YouTube channel with all new subscribers? Migrate all their google drive and AdSense accounts to another company, etc?

This is trivially simple and creates no moat?

299. xpe ◴[] No.45107153{6}[source]
Here are some ways that it’s not very far from a market mechanism:

1. How much an organization is willing to invest in X competes against other market opportunities.

2. The effective price per share (as part of the latest round of financing) is an implicit negotiation.

It is a matter of degree, sure, but my point still stands: there is a lot of collective information going into this valuation. So an individual should be intellectually humble relative to that. How many people have more information than even an imperfect market-derived quantity?

replies(1): >>45107513 #
300. ◴[] No.45107176[source]
301. bravetraveler ◴[] No.45107181{5}[source]
That'll do :) edit: I assure you, there will still be a van
replies(1): >>45108671 #
302. serf ◴[] No.45107188{7}[source]
>I think you're really bending over backwards to make this company seem non viable.

Having experienced Anthropic as a customer, I have a hard time thinking that their inevitable failure (something i'd bet on) will be model/capability-based, that's how bad they suck at every other customer-facing metric.

You think Amazon is frustrating to deal with? Get into a CSR-chat-loop with an uncaring LLM followed up on by an uncaring CSR.

My minimum response time with their customer service is 14 days -- 2 weeks -- while paying 200usd a month.

An LLM could be 'The Great Kreskin' and I would still try to avoid paying for that level of abuse.

replies(2): >>45107371 #>>45111602 #
303. seydor ◴[] No.45107192[source]
Which one will hit $1T first?
304. Davidzheng ◴[] No.45107202[source]
The human brain can't run off a nuclear power plant because it was too hard for evolution to figure out, but we figured it out. There's no reason running on a nuclear power plant won't give much higher intelligence.
replies(1): >>45109787 #
305. xpe ◴[] No.45107224{6}[source]
A painting on a wall is merely an inanimate object.

A company has agency; it seeks to add economic value to itself over time including changing people’s perceptions.

I don’t see how your comments have any bearing to the point I was making. What am I missing?

replies(1): >>45108419 #
306. serf ◴[] No.45107232{5}[source]
I was expecting a wedding or birth announcement from that picture framing and title.

"We would like to introduce you to the spawn of Johnny Ive and Sam Altman, we're naming him Damien Thorn."

307. AlexandrB ◴[] No.45107239[source]
The whole LLM era is horrible. All the innovation is coming "top-down" from very well funded companies - many of them tech incumbents, so you know the monetization is going to be awful. Since the models are expensive to run it's all subscription priced and has to run in the cloud where the user has no control. The hype is insane, and so usage is being pushed by C-suite folks who have no idea whether it's actually benefiting someone "on the ground" and decisions around which AI to use are often being made on the basis of existing vendor relationships. Basically it's the culmination of all the worst tech trends of the last 10 years.
replies(12): >>45107334 #>>45107517 #>>45107684 #>>45107685 #>>45108349 #>>45109055 #>>45109547 #>>45109687 #>>45111383 #>>45112507 #>>45112534 #>>45114113 #
308. sjapkee ◴[] No.45107242[source]
The biggest problem is that the result isn't worth the resources spent.
309. simianwords ◴[] No.45107250{4}[source]
>It is all fake and made up, and the numbers are detached from the real world, but it's not like the market doesn't know that.

How? The market is the one that made the decision to invest. They are not playing musical chairs.

310. not_the_fda ◴[] No.45107268{3}[source]
"This is a technology that could replace a sizable fraction of humamkind as a labor input."

And if it does? What happens when a sizable fraction of humamkind is hungry and can't find work? It usually doesn't turn out so well for the rich.

replies(3): >>45108189 #>>45110145 #>>45112483 #
311. seydor ◴[] No.45107288[source]
and yet it's still only ~10% of google's revenue.
312. mritchie712 ◴[] No.45107311{3}[source]
CC being better than Cursor didn't make sense to me until I realized Anthropic trains[0] its models to use its own built-in tools[1].

0 - https://x.com/thisritchie/status/1944038132665454841

1- https://docs.anthropic.com/en/docs/agents-and-tools/tool-use...

replies(1): >>45111844 #
313. simianwords ◴[] No.45107334{3}[source]
This is a very pessimistic take. Where else do you think the innovation would come from? Take cloud, for example - where did the innovation come from? It was from the top. I have no idea how you came to the conclusion that this implies monetization is going to be awful.

How do you know models are expensive to run? They have gone down in price repeatedly in the last 2 years. Why do you assume it has to run in the cloud when open source models can perform well?

> The hype is insane, and so usage is being pushed by C-suite folks who have no idea whether it's actually benefiting someone "on the ground" and decisions around which AI to use are often being made on the basis of existing vendor relationships

There are hundreds of millions of chatgpt users weekly. They didn't need a C suite to push the usage.

replies(4): >>45107599 #>>45107679 #>>45107713 #>>45111730 #
314. robotresearcher ◴[] No.45107341{4}[source]
We make hundreds of millions of brains a year for the cost of their parents' food and shelter.

That’s the known minimum cost. We have a lot of room to get costs down if we can figure out how.

315. serf ◴[] No.45107343{4}[source]
>Labs can just step up the way they track signs of prompts meant for model distillation. Distillation requires a fairly large number of prompt/response tuples, and I am quite certain that all of the main labs have the capability to detect and impede that type of use if they put their backs into it.

....while degrading their service for paying customers.

This is the same problem as law-enforcement-agency forwarding threats and training LLMs to avoid user-harm -- it's great if it works as intended, but more often than not it throws a lot more prompt cancellations at actual users by mistake, refuses queries erroneously -- and just ruins user experience.

I'm not convinced any of the groups can avoid distillation without ruining the customer experience.

316. SilverElfin ◴[] No.45107347[source]
The other problem is that big companies can take a loss and starve out any competition. They already make a ton of money from various monopolies. And they do not have the distraction of needing to find funding continuously. They can just keep selling these services at a loss until they’re the only ones left. That’s leaving aside the advantages they have elsewhere - like all the data only they can access for training. For example, it is unfair that Google can use YouTube data, but no one else can. How can that be fair competition? And they can also survive copyright lawsuits with their money. And so on.
317. lotsofpulp ◴[] No.45107357{3}[source]
Why would a global taxi/delivery broker not be worth billions? Their most recent 10-Q says they broker 36 million rides or deliveries per day. Even profiting $1 on each of those would result in a company worth billions.
318. jcranmer ◴[] No.45107367{3}[source]
> Barely 50 years ago computers used to cost a million dollars and were less powerful than your phone's SIM card.

Fifty years ago, we were starting to see the very beginning of workstations (not quite the personal computer of modern days), something like this: https://en.wikipedia.org/wiki/Xerox_Alto, which cost ~$100k in inflation-adjusted money.

319. nikanj ◴[] No.45107369[source]
$183B makes sense because 20 years ago something else was valued at $1.65 billion and money has decreased in value 100-fold?
replies(2): >>45110317 #>>45111151 #
320. sbarre ◴[] No.45107371{8}[source]
Maybe you don't want to share, but I'm scratching my head trying to think of something I would need to talk to Anthropic's customer service about that would be urgent and un-straightforward enough to frustrate me to the point of using the term "abuse"...
replies(1): >>45108118 #
321. serf ◴[] No.45107372{5}[source]
I think this misunderstands the scale of these models.

And honestly I don't think a lot of these companies would turn a profit on pure utility -- the electric and water company doesn't advertise like these groups do; I think that probably means something.

replies(1): >>45108646 #
322. beAbU ◴[] No.45107397{6}[source]
The post borders on turbo encabulator levels of insanity. It makes zero sense.

What does the product do?

replies(1): >>45107546 #
323. serf ◴[] No.45107400{3}[source]
it's difficult for me to imagine this level of compute existing and sitting there idle somewhere; it just doesn't make sense.

So we can at least assume that whoever is deciding to move the capacity does so at some business risk elsewhere.

324. simianwords ◴[] No.45107433{5}[source]
Why is this illegal btw? I mean, what's stopping an AI company from releasing a proper NSFW model? I hope it doesn't happen, but I want to know what prevents them from doing it now.
replies(1): >>45108094 #
325. simianwords ◴[] No.45107453{5}[source]
Uber is profitable so why do you think it doesn't work?
replies(1): >>45112311 #
326. simianwords ◴[] No.45107481{3}[source]
SV playbook has been to make sustainable businesses. Uber makes profits, so do Google, Amazon and other big tech.

> LLMs are amazing but I'm not sure they justify the astronomical training capex, other than as a stepping stone.

They can just... stop training today and quickly recoup the costs, because inference is mostly profitable.

replies(1): >>45113853 #
327. simianwords ◴[] No.45107503{4}[source]
It's a strange way to view things: the investors found a place to invest money from which they can make profits, and they did it.

Much like any other investment. What do you think makes this more speculative than any other investment?

328. fidotron ◴[] No.45107513{7}[source]
> there is a lot of collective information going into this valuation

No, there isn't. For example, I would like to legally bet against Anthropic existing as a going concern in five years. Where can I do this? All the information against them is discarded and hidden.

replies(2): >>45111170 #>>45111178 #
329. dpe82 ◴[] No.45107517{3}[source]
In a previous generation, the enabler of all our computer tech innovation was the incredible pace of compute growth due to Moore's Law, which was also "top-down" from very well-funded companies since designing and building cutting edge chips was (and still is) very, very expensive. The hype was insane, and decisions about what chip features to build were made largely on the basis of existing vendor relationships. Those companies benefited, but so did the rest of us. History rhymes.
replies(4): >>45107619 #>>45109790 #>>45112438 #>>45113939 #
330. koakuma-chan ◴[] No.45107546{7}[source]
> What does the product do?

I think this is like ChatGPT, but it generates "inner monologue" in the background, and the "inner monologue" is then added to the context, and this "addresses" "sycophancy, attention deficits, and inconsistent prioritization"

331. naiv ◴[] No.45107561[source]
Same. We all moved to codex in the past weeks not looking back at our cancelled Max20 subscriptions.

But who knows what will be the best tool/model to use in October.

332. AlexandrB ◴[] No.45107599{4}[source]
> I have no idea how you came to the conclusion that this implies monetization is going to be awful.

Because cloud monetization was awful. It's either endless subscription pricing or ads (or both). Cloud is a terrible counter-example because it started many awful trends that strip consumer rights. For example "forever" plans that get yoinked when the vendor decides they don't like their old business model and want to charge more.

replies(3): >>45107632 #>>45110704 #>>45111380 #
333. psychoslave ◴[] No.45107617{3}[source]
Yeah, no hate for kung fu here, but maybe learning to better communicate together, act in ways that allows everyone to thrive in harmony and spread peace among all humanity might be a better thing to start incorporating, might not it?
replies(1): >>45116002 #
334. dmschulman ◴[] No.45107619{4}[source]
Eh, if this were true then IBM and Intel would still be the kings of the hill. Plenty of companies came from the bottom up out of nothing during the 90s and 2000s to build multi-billion dollar companies that still dominate the market today. Many of those companies struggled for investment and grew over a long timeframe.

The argument is that something like that is not really possible anymore, given the absurd upfront investments we're seeing existing AI companies need in order to further their offerings.

replies(2): >>45107682 #>>45107904 #
335. ants_everywhere ◴[] No.45107622[source]
> What gets me is that this isn't even a software moat anymore - it's literally just whoever can get their hands on enough GPUs and power infrastructure.

I'm curious to hear from experts how much this is true if interpreted literally. I definitely see that having hardware is a necessary condition. But is it also a sufficient condition these days? ... as in is there currently no measurable advantage to having in-house AI training and research expertise?

Not to say that OP meant it literally. It's just a good segue to a question I've been wondering about.

336. yieldcrv ◴[] No.45107631{6}[source]
and people focus way too much on superimposed images instead of completely new digital avatars, which is what's already taking off now
337. simianwords ◴[] No.45107632{5}[source]
Vast majority of cloud users use AWS, GCP and Azure which have metered billing. I'm not sure what you are talking about.
338. MangoCoffee ◴[] No.45107633[source]
Why doesn't Apple make their own iPhones instead of contracting them to Foxconn?
339. aripickar ◴[] No.45107650{4}[source]
Tech companies are valued at a multiple of the next 12 months' revenue, not the last 12 months'. Since Anthropic grew from $1 billion to $5 billion in revenue in ~8 months, that means it ~10x'ed revenue y/y off of a $1 billion base. If you assume even 60% of that growth is retained (low for traditional SaaS businesses, but who knows), then Anthropic is ~10% of Google in terms of revenue in mid ~2027.

Basically, 5x-ing revenue in 8 months off of a billion dollars starting revenue is insane. Growing this quickly at this scale breaks every traditional valuation metric.

(And no - this doesn't include margins or COGS).

340. llamasushi ◴[] No.45107661{5}[source]
One doesn't need to go more than 2 feet into the mire of meme coins before finding the detritus of 6000000 rug pulls. Just that these guys never get prosecuted.
replies(1): >>45108765 #
341. yahoozoo ◴[] No.45107665{5}[source]
What about inference costs?
342. ViewTrick1002 ◴[] No.45107676{6}[source]
And in Europe Bolt is winning in many markets.

Taxi apps are a commodity today.

343. HarHarVeryFunny ◴[] No.45107679{4}[source]
The C-suite is pushing business adoption, and 95% of those GenAI projects are failing.
replies(2): >>45107692 #>>45108164 #
344. dpe82 ◴[] No.45107682{5}[source]
Anthropic has existed for a grand total of 4 years.

But yes, there was a window of opportunity when it was possible to do cutting-edge work without billions of investment. That window of opportunity is now past, at least for LLMs. Many new technologies follow a similar pattern.

replies(1): >>45109680 #
345. awongh ◴[] No.45107685{3}[source]
> All the innovation is coming "top-down" from very well funded companies - many of them tech incumbents

What I always thought was exceptional is that it turns out it wasn't the incumbents who have the obvious advantage.

Set aside the fact that everyone involved is already in the top 0.00001% echelon of the space (Sam Altman and everyone involved with the creation of OpenAI): if you had asked me 10 years ago who would have the leg up in creating advanced AI, I would have said the big companies hoarding data.

Turns out just having that data wasn't a starting requirement for the generation of models we have now.

A lot of the top players in the space are not the giant companies with unlimited resources.

Of course this isn't the web or web 2.0 era where to start something huge the starting capital was comparatively tiny, but it's interesting to see that the space allows for brand new companies to come out and be competitive against Google and Meta.

346. llamasushi ◴[] No.45107690{4}[source]
Lol, Tether and Bitfinex are examples that come to mind. A lot of the OG crypto institutions got to where they are by "faking it till they made it" long enough to actually make it.

Does no one still remember that Tether continually stalled audits FOR YEARS in the face of increasing scrutiny?

347. simianwords ◴[] No.45107692{5}[source]
The other side of it is that lots of users are willingly purchasing the subscription without needing any push.
replies(2): >>45108805 #>>45111912 #
348. acdha ◴[] No.45107713{4}[source]
> Take cloud for example - where did the innovation come from? It was from the top.

Definitely not. That came years later but in the late 2000s to mid-2010s it was often engineers pushing for cloud services over the executives’ preferred in-house services because it turned a bunch of helpdesk tickets and weeks to months of delays into an AWS API call. Pretty soon CTOs were backing it because those teams shipped faster.

The consultants picked it up, yes, but they push a lot of things and usually it’s only the ones which actual users want which succeed.

replies(2): >>45108313 #>>45110983 #
349. monax ◴[] No.45107772{4}[source]
If you get a 10x speedup with an LLM, it means you are not doing anything new or interesting.
replies(1): >>45112585 #
350. 3uler ◴[] No.45107904{5}[source]
Intel was king of the hill until 2018.
replies(1): >>45111724 #
351. powerapple ◴[] No.45107915[source]
Also, not all compute was necessary for the final model; a large chunk of it is trial-and-error research. In theory, for the $1B you spent training the latest model, a competitor will be able to do it six months later with $100M.
replies(1): >>45110291 #
352. marcosdumay ◴[] No.45107919{9}[source]
The government creates money every time it spends more than it taxes. AFAIK, the US has been doing that nonstop since the turn of the century.

That new money is different from the new money the central bank creates to push interest rates down. That later one the US has been destroying. But both do many of the same things (but not all).

replies(1): >>45118284 #
353. badpun ◴[] No.45107926{6}[source]
An art piece cannot do buybacks/dividends.
354. ethbr1 ◴[] No.45107941{5}[source]
My guess would be that it parallels other backend software revolutions.

Initially, first party proprietary solutions are in front.

Then, as the second-party ecosystem matures, they build on highest-performance proprietary solutions.

Then, as second parties monetize, they begin switching to OSS/commodity solutions to lower COGS. And with wider use, these begin to outcompete proprietary solutions on ergonomics and stability (even if not absolute performance).

While Anthropic and OpenAI are incinerating money, why not build on their platforms? As soon as they stop, the scales tilt towards an Apache/nginx-type commoditized backend.

355. gizajob ◴[] No.45107976{5}[source]
The economics will work out when the district heating is run off the local AI/cash furnace.
356. xp84 ◴[] No.45108023{4}[source]
> " If Anthropic's internal version of Claude Code gets so good that they can recreate all of google's products quickly"

I know you aren't asserting this but rather just putting the argument out there, but to me at least it's interesting comparing a company that has vendor lock-in and monopoly or duopoly status in various markets vs one that doesn't.

I'd argue that Google's products themselves haven't been their moat for decades -- their moat is "default search engine status" in the tiny number of Browsers That Matter (Arguably just Chrome and Mobile Safari), being entrenched as the main display ad network, duopoly status as an OS vendor (Android), and monopoly status on OS vendor for low-end education laptops (ChromeOS). If somehow those were all suddenly eliminated, I think Google would be orders of magnitude less valuable.

357. m101 ◴[] No.45108069{6}[source]
Except it's like a second-tier semi manufacturer spending 10x less on the same fab a year later. Here it might make sense to wait a bit. There will be customers, especially considering the diminishing returns these models seem to have run into. If performance were improving I'd agree with you, but it's not.
358. xp84 ◴[] No.45108092{5}[source]
Don't those cost like $400,000 apiece to outfit, though? I mean this with tremendous respect because I think they're the only ones doing it "right," but I feel like Waymo is kind of 'bruteforcing' autonomous driving with money. There's an inherent limit to the impact of a technology (and thus its long-term value) based on its cost, and even stipulating that Waymo has solved it in general, I think a valuation should be contingent on a roadmap that shows how it's going to scale out -- this seems like an as-yet unsolved problem until someone shows how to combine the reliability of the tech-heavy Waymo system with the price tag of a Tesla.
replies(2): >>45109983 #>>45109984 #
359. baq ◴[] No.45108094{6}[source]
in some jurisdictions generating a swastika or a hammer and sickle is illegal.

that said, I'm sure you can imagine that the really illegal, truly, positively sickening and immoral stuff is children-adjacent and you can be 100% sure there are sociopaths doing training runs for the broken people who'll buy the weights.

replies(1): >>45108137 #
360. babelfish ◴[] No.45108118{9}[source]
Particularly since they seem to be complaining about service as a consumer, rather than an enterprise...
replies(1): >>45110256 #
361. simianwords ◴[] No.45108137{7}[source]
Is it illegal to use mspaint to generate similar vile things?
replies(2): >>45108643 #>>45109834 #
362. xp84 ◴[] No.45108160[source]
To me, 'public pension monies' (more or less, retirement savings from citizens who happen to work for the government) and 'public funds' don't seem like the exact same thing. To me, public funds implies money from the government budget or sovereign wealth funds.

Although I admit that the government may be on the hook to replenish any spectacular failures in such a pension plan so in that way, it is somewhat fair -- though I doubt any one investment is weighted so heavily in any pension fund as to precipitate such an event.

replies(1): >>45109357 #
363. og_kalu ◴[] No.45108164{5}[source]
That same report said a lot of people are just using personal accounts for work, though.
364. dweekly ◴[] No.45108189{4}[source]
I don't think most folks think very hard about where most wealth comes from but imagine it just sort of exists in a fixed quantity or is pulled from the ground like coal or diamonds - there's a fixed amount of it, and if there are very rich people, it must be because they took the coal/diamonds away from other people who need it. This leads to catchy slogans.

But it's pretty obvious wealth can be created and destroyed. The creation of wealth comes from trade, which generally comes from a vibrant middle class which not only earns a fair bit but also spends it. Wars and revolutions are effective at destroying wealth and (sometimes) equitably redistributing what's left.

Both the modern left and modern right seem to have arrived at a consensus that trade frictions are a good way to generate (or at least preserve) wealth, while the history of economics indicates quite the contrary. This was recently best pilloried by a comic that showed a town under siege and the besieging army commenting that this was likely to make the city residents wealthy by encouraging self-reliance.

We need abundant education and broad prosperity for stability - even (and maybe especially) for the ultra wealthy. Most things we enjoy require absolute and not relative wealth. Would you rather be the richest person in a poor country or the poorest of the upper class in a developed economy?

replies(2): >>45109669 #>>45111799 #
365. jpalomaki ◴[] No.45108204{4}[source]
"Google’s advertising revenue in 2024 was about $264.6 billion"

Somebody above said that Anthropic might reach $9 billion ARR by the end of this year.

366. ipython ◴[] No.45108226{3}[source]
Not to mention water for cooling. Large data centers can use 1 million+ gallons per day.
replies(1): >>45110537 #
367. m101 ◴[] No.45108243{3}[source]
Why is this downvoted when it's spot on? If reality < expectations, so much money is sitting on extremely quickly depreciating assets. It will be bad. Risk is to the downside.
replies(1): >>45108787 #
368. jpalomaki ◴[] No.45108273[source]
Valuations are high, but it's also the first time in history when developers are spending $200 per month on tools and feeling they are getting great value out of them.

I think one key question is whether Anthropic can replicate this in some other segment, like people working with financials.

369. m101 ◴[] No.45108286[source]
A joke for finance types I was told a while back:

"what do you call a rouge trader that makes money?"

"Managing director"

If someone makes money on time, everything is forgiven. Money blinds us.

replies(1): >>45110452 #
370. belter ◴[] No.45108298[source]
The AI story is over.

One more unimpressive release of ChatGPT or Claude, another $2 billion spent by Zuckerberg on subpar AI offerings, and the final realization by CNBC that all of AI right now is just code generators -- that will do it.

You will have ghost data centers in excess like you have ghost cities in China.

371. simianwords ◴[] No.45108313{5}[source]
Sure, the same way GPT was invented at Google.
372. tedivm ◴[] No.45108349{3}[source]
This is only if you ignore the growing open source models. I'm running Qwen3-30B at home and it works great for most of the use cases I have. I think we're going to find that the optimizations coming from companies out of China are going to continue making local LLMs easier for folks to run.
replies(1): >>45116416 #
373. Workaccount2 ◴[] No.45108419{7}[source]
I'm not the one who decided that a painting appreciates with time and trends. But they do it pretty reliably, and people keep paying the dollars that we all use for everything else for them. It's just another generally appreciating asset, regardless of whether its value comes from looks or tax-structuring utility.
replies(1): >>45120431 #
374. m101 ◴[] No.45108437[source]
This round started with a $5bn target and ended at $13bn. When this sort of thing happens it's normally because the company 1) wants to hit the "hot" market, and 2) has uncertainty about its ability to raise at higher valuations in the future.

Whatever it is, the signal it sends about Anthropic insiders is negative for AI investors.

Other comments having read a few hundred comments here:

- there is so much confusion, uncertainty, and fanciful thinking that it reminds me of the other bubbles that existed when people had to stretch their imaginations to justify valuations

- there is increasing spend on training models, and decreasing improvements in new models. This does not bode well

- wealth is an extremely difficult thing to define. It's defined vaguely through things like cooperation and trade. Ultimately these LLMs actually do need to create "wealth" to justify the massive investments made. If they don't do this fast, this house of cards is going to fall, fast.

- having worked in finance and spoken to finance types for a long time: they are not geniuses. They are far from it. Most people went into finance because of an interest in money. Just because these people have $13bn of other people's money at their disposal doesn't mean they are any smarter than people orders of magnitude poorer. Don't assume they know what they are doing.

replies(2): >>45109595 #>>45110493 #
375. Avshalom ◴[] No.45108456{5}[source]
if you're referring to https://youtu.be/GcqQ1ebBqkc?t=1027 he doesn't actually say that each model has been profitable.

He says "You paid $100 million and then it made $200 million of revenue. There's some cost to inference with the model, but let's just assume in this cartoonish cartoon example that even if you add those two up, you're kind of in a good state. So, if every model was a company, the model is actually, in this example is actually profitable. What's going on is that at the same time"

notice those are hypothetical numbers and he just asks you to assume that inference is (sufficiently) profitable.

He doesn't actually say they made money by the EoL of some model.

376. twostorytower ◴[] No.45108472{4}[source]
Isn't that just the right thing to do, statistically? Vegas has been operating profitably this way for decades.
replies(2): >>45109201 #>>45109647 #
377. itronitron ◴[] No.45108477[source]
Hmm, I wonder how much bitcoin someone could mine with that amount of compute.
replies(1): >>45111419 #
378. uncircle ◴[] No.45108491{4}[source]
https://xkcd.com/605/
379. jayd16 ◴[] No.45108582{6}[source]
You're saying that's needed for inference?
380. Majromax ◴[] No.45108643{8}[source]
Not in the United States, but it is illegal in some jurisdictions.

Additionally, the entire "payment processors leaning on Steam" thing shows that it might be very difficult to monetize a model that's known for generating extremely controversial content. Without monetization, it would be hard for any company to support the training (and potential release) of an unshackled enterprise-grade model.

381. jayd16 ◴[] No.45108646{6}[source]
What's the scale for inference? Is it truly that immense? Can you ballpark what you think would make such a thing impossible?

> the electric and water company doesn't advertise like these groups do

I'm trying to understand what you mean here. In the US these utilities usually operate in a monopoly so there's no point in advertising. Cell service has plenty of advertising though.

382. worldsayshi ◴[] No.45108671{6}[source]
Solar powered e-van? I found this now: https://soleva.org
383. fancyfredbot ◴[] No.45108682[source]
So many negative comments here! The fact that one of the top players in a new market segment with significant growth potential can raise $13B at a 20x revenue valuation is not the bubble indicator you think it is.

It's at least possible that the investment pays off. These investors almost certainly aren't insane or stupid.

We may still be in a bubble, but before you declare money doesn't mean anything any more and start buying put options I'd probably look for more compelling evidence than this.

replies(4): >>45109053 #>>45109073 #>>45109682 #>>45110475 #
384. HaZeust ◴[] No.45108765{6}[source]
Yeah, but they're not playing with institutional money. They're not messing with people that have world-leaders on speed dial. Crypto gets away with what it does because when you enter an explicitly laissez-faire side of life, expect people to act laissez-faire. The rest is fraud/laundering/illicit activity tracking, which is why KYC requirements were passed right on schedule.
385. dcchambers ◴[] No.45108787{4}[source]
Being critical of AI companies on Hacker News is pretty tough these days. Either the majority of people are all-in and want to bury their heads in the sand to the real dangers and risks (economic, psychological, etc.), or there's just lots of astroturfing going on.
replies(1): >>45111440 #
386. HarHarVeryFunny ◴[] No.45108805{6}[source]
Sure - there are use cases for LLMs that work, and use cases that don't.

I think those actually using "AI" have a lot better idea of which are which than the C-suite folk.

387. lifty ◴[] No.45108814{4}[source]
Someone mentioned their projected ARR for 2025 is 9b. Which makes sense intuitively looking at how much I spent with them this year. So the valuation looks a bit more sane with those numbers.
388. dgrcode ◴[] No.45108858{4}[source]
How old was Alphabet in 2024? And Anthropic?

How much was Google's revenue in 2003? It was $1.5 billion ($2.6 billion in today's dollars).

Not saying the price is justified, but the comparison is not very fair.

389. Nextgrid ◴[] No.45108937{4}[source]
Practically speaking I think everyone involved would've had a good incentive to brush it off behind closed doors and not rock the boat. Crypto is entirely based on vibes (there are very few - if any - legitimate applications) and rocking the boat would cause losses across the entire industry.
390. Nextgrid ◴[] No.45108957{4}[source]
Fraud is only called fraud if you get caught and defraud the wrong people. Corporation-on-consumer fraud is generally OK and a lot of businesses we consider "legitimate" do it as standard practice. Fraud against investors and "the rich" can still be papered over and forgotten if everyone ends up richer in the end. SBF just got unlucky.
391. criemen ◴[] No.45109023[source]
I'm not so confident in that yet. If you look at the inference prices Anthropic charges (on the API) it's not a race to the bottom - they are asking for what I feel is a lot of money - yet people keep paying that.
replies(2): >>45109757 #>>45116524 #
392. mateus1 ◴[] No.45109053[source]
> These investors almost certainly aren't insane or stupid.

I'm sure this exact sentence was said before every bubble burst.

replies(2): >>45109559 #>>45109616 #
393. atleastoptimal ◴[] No.45109055{3}[source]
Nevertheless, prices for LLMs at any given level of performance have gone down precipitously over the past few years. However bad the decisions being made may seem, the decision-making process is both making an extreme amount of money for those in the AI companies and providing extremely cheap, high-quality intelligence for those using their offerings.
replies(1): >>45109158 #
394. kittikitti ◴[] No.45109073[source]
These are the same investors who got scammed by SBF who didn't even have a basic spreadsheet that explained the finances.
replies(2): >>45109338 #>>45109722 #
395. atleastoptimal ◴[] No.45109091[source]
HN in 2046

> Headline: OpenAI raises 400 Trillion, proclaims dominion over the delta quadrant

> Top comment: This just proves that it's a bubble. No AI company has been profitable, we're in the era of diminishing returns. I don't know one real use case for AI

It's hilarious how routinely bearish this site is about AI. I guess it makes sense given how much AI devalues siloed tech expertise.

replies(1): >>45109546 #
396. pimlottc ◴[] No.45109158{4}[source]
Remember when you could get an Uber ride all the way across town for $5? It is way too early to know what these services will actually cost.
replies(1): >>45113128 #
397. FergusArgyll ◴[] No.45109201{5}[source]
The context was double or nothing the entire human population of the universe.
398. luisfmh ◴[] No.45109224{5}[source]
Hard disagree on this. The gap between the levels of statistical significance you get in economics vs physics is massive. They're not at the same levels of inevitability. The predictive power of the laws of physics vs the laws of economics is vastly different.
399. FergusArgyll ◴[] No.45109306[source]
IIRC Matt Levine says: when there is a tech bubble, the correct trade is not to short the nasdaq, it's to start a company and ask Masayoshi Son for an investment
400. Wojtkie ◴[] No.45109338{3}[source]
... or really any SoftBank Vision Fund backed startup
401. j7ake ◴[] No.45109357{3}[source]
Government workers are funded by public money, so public pension monies are funded by public money ultimately.
402. FergusArgyll ◴[] No.45109444{3}[source]
> they manage to attract top talent

I do think this is important. Many of the best researchers are also religious AGIists and Anthropic is the most welcoming to them. This is a field where the competence of researchers really matters.

403. VirgilShelton ◴[] No.45109451[source]
My contrarian take on the astronomical costs needed to scale LLM infrastructure is that, since it does cost so much, innovation in the grid and in power plants / renewables will also see massive gains and ultimately save our planet.
404. andrewgleave ◴[] No.45109495[source]
> “There's kind of like two different ways you could describe what's happening in the model business right now. So, let's say in 2023, you train a model that costs 100 million dollars. And then you deploy it in 2024, and it makes $200 million of revenue. Meanwhile, because of the scaling laws, in 2024, you also train a model that costs a billion dollars. And then in 2025, you get $2 billion of revenue from that $1 billion, and you spend $10 billion to train the model.
>
> So, if you look in a conventional way at the profit and loss of the company, you've lost $100 million the first year, you've lost $800 million the second year, and you've lost $8 billion in the third year. So, it looks like it's getting worse and worse. If you consider each model to be a company, the model that was trained in 2023 was profitable.”
>
> ...
>
> “So, if every model was a company, the model is actually, in this example, is actually profitable. What's going on is that at the same time as you're reaping the benefits from one company, you're founding another company that's like much more expensive and requires much more upfront R&D investment. And so, the way that it's going to shake out is this will keep going up until the numbers go very large, the models can't get larger, and then it will be a large, very profitable business, or at some point, the models will stop getting better. The march to AGI will be halted for some reason, and then perhaps it will be some overhang, so there will be a one-time, oh man, we spent a lot of money and we didn't get anything for it, and then the business returns to whatever scale it was at.”
>
> ...
>
> “The only relevant questions are, at how large a scale do we reach equilibrium, and is there ever an overshoot?”

From Dario’s interview on Cheeky Pint: https://podcasts.apple.com/gb/podcast/cheeky-pint/id18210553...
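For what it's worth, here's a minimal sketch of the arithmetic in that quote, using the same hypothetical dollar figures from the interview (none of these are real Anthropic numbers):

```python
# Toy P&L for the "each model is its own company" framing above.
# All dollar figures are the hypothetical ones from the interview, in $B;
# inference costs are ignored, as in the quote.

train_cost = {2023: 0.1, 2024: 1.0, 2025: 10.0}  # training spend per model generation
revenue    = {2023: 0.2, 2024: 2.0}               # lifetime revenue of each generation,
                                                   # earned the year after training

# Company-level view: each year books last year's model revenue against this
# year's (much larger) training bill.
for year, cost in train_cost.items():
    rev = revenue.get(year - 1, 0.0)
    print(f"{year}: revenue ${rev:.1f}B - training ${cost:.1f}B = ${rev - cost:.1f}B")

# Per-model view: each generation, taken alone, earns back ~2x its training cost.
for year, rev in revenue.items():
    print(f"model trained in {year}: lifetime profit ${rev - train_cost[year]:.1f}B")
```

The company-level loop reproduces the -$100M, -$800M, -$8B sequence from the quote, while the per-model loop shows each generation in the black on its own.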

405. greenchair ◴[] No.45109546[source]
comment history appears to be an AI shill account.
replies(1): >>45113116 #
406. hintymad ◴[] No.45109547{3}[source]
> The whole LLM era is horrible. All the innovation is coming "top-down" from very well funded companies

Wouldn't it be the same for the hardware companies? Not everyone could build CPUs as Intel/Motorola/IBM did, not everyone could build mainframes like IBM did, and not everyone could build smartphones like Apple or Samsung did. I'd assume it boils down to the value of the LLMs instead of who has the moat. Of course, personally I really wish everyone could participate in the innovation like in the internet era, like training and serving large models on a laptop. I guess that day will come, like PCs over mainframes, but just not now.

407. fancyfredbot ◴[] No.45109559{3}[source]
That sounds very cynical and knowing which is obviously great, but not super interesting. Do you think the investors are insane or stupid? Do you think this is a bubble and that it's about to burst? I'm interested to know why.
408. ◴[] No.45109568[source]
409. utyop22 ◴[] No.45109595[source]
Lol yeah I generally read most comments on here with one eye closed. This is one of the good ones though.
replies(1): >>45111314 #
410. sothatsit ◴[] No.45109616{3}[source]
Most investors I've heard talk about the AI bubble have mentioned exactly that they know it is a bubble. They are just playing the game, because there is money to be made before that bubble bursts. And additionally, there is real value in these companies.

I would assume the majority of investors in AI are playing a game of estimating how much more these AI valuations can run before crashing, and whether that crash will matter in the long-run if the growth of these companies lives up to their estimates.

411. roncesvalles ◴[] No.45109647{5}[source]
It's not the same due to the Law of Large Numbers. The risk involved in many small 51% bets is very different from the risk in a single all-or-nothing 51% bet.
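A quick simulation sketch of that difference (the 51% edge, even-money payout, and bet counts below are made-up illustration numbers):

```python
import random

# Same 51% edge and even-money payout, spread over many small bets vs. one
# all-or-nothing bet. Numbers here are purely illustrative.
random.seed(0)
TRIALS, BANKROLL, P_WIN = 10_000, 1_000, 0.51

def prob_of_ending_down(n_bets: int) -> float:
    """Chance of finishing below the starting bankroll after n equal-sized bets."""
    stake = BANKROLL / n_bets
    losers = 0
    for _ in range(TRIALS):
        money = BANKROLL
        for _ in range(n_bets):
            money += stake if random.random() < P_WIN else -stake
        losers += money < BANKROLL
    return losers / TRIALS

print("1 all-or-nothing bet:", prob_of_ending_down(1))     # ~0.49: basically a coin flip
print("1000 small bets:     ", prob_of_ending_down(1000))  # ~0.25: the edge dominates
```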
replies(1): >>45113119 #
412. utyop22 ◴[] No.45109669{5}[source]
There's a subtle and nuanced difference between real wealth and financial wealth that most people never touch on.
413. falcor84 ◴[] No.45109680{6}[source]
What about DeepSeek R1? That was earlier this year - how do you know that there won't be more "DeepSeek moments" in the coming years?
414. utyop22 ◴[] No.45109682[source]
Remind me what happened re. SoftBank + WeWork.
replies(1): >>45110853 #
415. crawshaw ◴[] No.45109687{3}[source]
> All the innovation is coming "top-down" from very well funded companies - many of them tech incumbents

The model leaders here are OpenAI and Anthropic, two new companies. In the programming space, the next leaders are Qwen and DeepSeek. The one incumbent is Google who trails all four for my workloads.

In the DevTools space, a new startup, Cursor, has muscled in on Microsoft's space.

This is all capital heavy, yes, because models are capital heavy to build. But the Innovator's Dilemma persists. Startups lead the way.

replies(2): >>45109839 #>>45109880 #
416. jjmarr ◴[] No.45109697{5}[source]
If your marginal utility of money increases as you have more of it, it's a rational decision to go to a casino and gamble.
replies(1): >>45110619 #
417. fancyfredbot ◴[] No.45109722{3}[source]
I see two of nineteen investors were also invested in FTX (Insight and Ontario teachers). With hindsight that's a bad investment although they probably recovered their money here so probably not their worst. Does this actually tell you they are stupid or insane?

I think that's one possible interpretation but another is that these funds choose to allocate a controlled portion of their capital toward high risk investments with the expectation that many will fail but some will pay off. It's far from clear that they are crazy or stupid.

replies(1): >>45110400 #
418. jstoppa ◴[] No.45109729{3}[source]
I agree, just another cycle
419. utyop22 ◴[] No.45109730{3}[source]
But if you're an investor who doesn't care about the long-term value of the firm, all you care about is maximizing your return on future sales of the shares of stock.

Doing proper intrinsic valuation of technology firms is nigh-on impossible.

420. pnt12 ◴[] No.45109732[source]
I mean, this sounds like survivor bias in action?

Google also bought Motorola for 12 billion and Microsoft bought Nokia for 7 billion. Those weren't success cases.

Or, more similarly, WeWork got $12B from investors and isn't doing well (hell, bankrupt, according to Wikipedia).

replies(2): >>45109921 #>>45111240 #
421. utyop22 ◴[] No.45109755{3}[source]
Yeah, these things take time to play out. So I always just say: the broad populace will finally realise that fantasy and reality have to converge at some point.
422. worldsayshi ◴[] No.45109757{3}[source]
Yeah, a collapse should only mean that training larger models becomes non-viable, right? Selling inference alone should still deliver profit.
423. Barbing ◴[] No.45109776{7}[source]
Thoughts on Ed Zitron’s pessimism?

“There Is No AI Revolution” - Feb ‘25:

https://www.wheresyoured.at/wheres-the-money/

replies(1): >>45125350 #
424. axus ◴[] No.45109785{4}[source]
The electricians in data center country report they are earning a lot of money.
425. kaashif ◴[] No.45109787{3}[source]
But if we could drink a bottle of oil and become 10x smarter for 1 hour, it would be really cool. There just wasn't any use for that in the savannah, or indeed many bottles of oil.
replies(1): >>45109960 #
426. JohnMakin ◴[] No.45109790{4}[source]
Should probably change this to "was the appearance of an incredible pace of compute growth due to Moore's Law," because even my basic CS classes from 15 years ago were teaching that it was drastically slowing down, and it isn't really a "law" so much as an observational trend that lasted a few decades. There are limits to how small you can make transistors and we're not too far from them, at least not in a way that would continue to yield the results of that curve.
replies(1): >>45110936 #
427. ◴[] No.45109795[source]
428. tick_tock_tick ◴[] No.45109826{5}[source]
It's going to be really weird when huge swaths of the internet are illegal to visit outside the USA because you keep running into that kind of AI-generated "content".
429. utyop22 ◴[] No.45109828{4}[source]
You're comparing the value of equity to firm earnings? Lol. I don't really bother calling out most financial stuff on here since I can't be bothered but come on.

It's not internally consistent, at all.

replies(1): >>45116228 #
430. tick_tock_tick ◴[] No.45109834{8}[source]
Most of Europe doesn't really have free speech; frankly, most of the world doesn't. Privileges like making MS Paint drawings of nearly whatever you want are pretty uniquely American.
431. nightski ◴[] No.45109839{4}[source]
At what point is OpenAI not considered new? It's a few months from being a decade old with 3,000 employees and $60B in funding.
replies(1): >>45109895 #
432. lexandstuff ◴[] No.45109880{4}[source]
And all of those companies except for Google are entirely dependant on NVIDIA who are the real winners here.
433. tick_tock_tick ◴[] No.45109892{5}[source]
You need 100+ GB of RAM and a top-of-the-line GPU to run legacy models at home. Maybe if you push it, that setup will let you handle 2, maybe 3 people. You think anyone is going to make money on that vs $20 a month to Anthropic?
replies(3): >>45112200 #>>45112210 #>>45112761 #
434. fshr ◴[] No.45109895{5}[source]
Well, compare them to Microsoft: 50 years old with 228,000 employees and $282 billion in revenue.
435. tick_tock_tick ◴[] No.45109921{3}[source]
> Google also bought Motorola for 12 billion and Microsoft bought Nokia for 7 billion. Those weren't success cases.

A lot of that was patent acquisition rather than trying to run those businesses, so it's hard to say whether it was a success or not.

436. jjmarr ◴[] No.45109983{6}[source]
Historically speaking, there was an 80-year period in which transporting mined natural lake ice from the US Northeast/Norway around the world was economically competitive with ice machines, depending on local market conditions.

Machine ice became competitive in India and Australia in the 1850s, but it took until the start of World War 1 (1914) for artificial ice production to surpass natural in America. And the industry only disappeared when every household could buy a refrigerator.

Self-driving doesn't have to scale globally to be economically viable as a technology. It could already be viable at $400k in HCOL areas with perfect weather (i.e. California, Austin, and other places they operate).

replies(1): >>45110322 #
437. Zigurd ◴[] No.45109984{6}[source]
That's like asking if it's better to launch on Falcon 9, or wait until Starship actually hits $100 a kilogram to orbit.
438. tonymet ◴[] No.45110020[source]
i guess emissions, climate concerns, economics are all just out the window here?

My feeble uncle isn't allowed to buy a single lightbulb in his state yet, but burning terawatts for useless porn generators is where we are investing our engineering efforts.

439. FergusArgyll ◴[] No.45110122[source]
Can you give us a date by which point you are > 90% confident in that?
replies(1): >>45112406 #
440. harmmonica ◴[] No.45110145{4}[source]
Big question is whether it replaces and then doesn't create new opportunity to make up for those casualties. I'm not sold on this, but there's this part of me that actually believes LLM's or perhaps AI more broadly will enable vast numbers of people to do things that were formerly impossible for them to do because the cost was too great, or the thought of doing it too complex. Now those same things are not only accessible, but easy to access. I made a comment earlier today in the thread about Google's antitrust "win" where things I couldn't formerly have done without sizable and costly third-party professional help are now possible for near-zero cost and near-zero time. It really can radically empower folks. Not sure that's going to make up for all the job loss, but there is the possibility of real empowerment.
replies(1): >>45112150 #
441. farceSpherule ◴[] No.45110167[source]
Don't worry... It will crash down soon enough just like the Internet did back in the 90's after similar, insane investments in infrastructure.
442. UltraSane ◴[] No.45110230{6}[source]
Yep they are tapping directly into main pipelines.
443. zmmmmm ◴[] No.45110244{3}[source]
It's a slightly different context, but Apple probably would have gone out of business if Microsoft hadn't needed them so badly to exist due to antitrust. Hard to imagine how different the world would have been now if that had happened.
444. ◴[] No.45110256{10}[source]
445. xenobeb ◴[] No.45110285{4}[source]
The problem is the video models are only impressive in news stories about the video models. When you actually try to use them you can see how the marketing is playing to people's imagination because they are such a massive disappointment.
replies(1): >>45110422 #
446. zmmmmm ◴[] No.45110286[source]
YouTube is fascinating to me because it never made any sense. At the time they were starting, bandwidth was expensive. How the hell did they pay the bills for that? And then every single rational piece of logic said they would be sued into oblivion due to copyright violations. Logically, YouTube should have been impossible, but here it is.

I often think about that when trying to evaluate forward-looking tech. Even though 99% of the time logic like that proves to be correct, it's also true that most of the time the winners in a race won exactly because they defied some piece of the standard framework of logic that everybody else played by. Uber is similar - they shouldn't exist, they basically broke the law in most countries they moved into, brazenly violated all kinds of barriers that kept the taxi industry completely entrenched for decades. But now they are dominating in most of these countries.

replies(1): >>45113709 #
447. SchemaLoad ◴[] No.45110291{3}[source]
Not only are the actual models rapidly devaluing, the hardware is too. Spend $1B on GPUs and next year there's a much better model out that's massively devalued your existing datacenter. These companies are building mountains of quicksand that they have to constantly pour more cash onto, or else they will rapidly be reduced to having no advantage.
replies(2): >>45113911 #>>45120332 #
448. SchemaLoad ◴[] No.45110297{3}[source]
I feel like it's pretty settled that they are a little bit useful, as a faster search engine, or being able to automatically sort my emails. But the value is nowhere near justifying the investment.
449. xenobeb ◴[] No.45110317{3}[source]
You are just making up nonsense.
450. Zigurd ◴[] No.45110322{7}[source]
One of the most interesting statistics about Waymo is how few of them there are. The only service area with what you could call a large number of vehicles is the Bay Area. The news reports I've seen about it say under 1000 there and fewer than 3000 nationally. Uber's CEO was quoted as saying that a Waymo completes more rides than 99% of Uber drivers. It's a pity he didn't make a comparison against the median Uber driver. But it's plausible that a Waymo could replace 10 Uber drivers or more. That ratio flows through to revenue.
451. utyop22 ◴[] No.45110400{4}[source]
They recovered their money, but what about the opportunity cost? It's actually an economic loss. In retrospect, given the risk, it was a pretty terrible investment.
replies(1): >>45114647 #
452. jedberg ◴[] No.45110403{3}[source]
> what are people who make these investments even betting on?

That they achieve AGI or a close approximation, and end up wealthier than god.

That's basically the bet here. Invest in OpenAI and Anthropic, and hope one of them reaches near-AGI.

replies(1): >>45111747 #
453. xnx ◴[] No.45110422{5}[source]
Not my experience. Have you used Veo 3?
454. utyop22 ◴[] No.45110430{4}[source]
Yep people skip over the history re YT - the battles fought re. copyright and broadcasters and so on.

Sure it was overcome, but not because of YT or Google, but because of external forces causing those people fighting it to converge on hosting their content on the platform.

455. timack ◴[] No.45110450[source]
FTX's stake in Anthropic was just under 8% so ~$14B.

(if it hadn't been liquidated)

456. willhslade ◴[] No.45110452{3}[source]
Rogue
457. slashdave ◴[] No.45110475[source]
> can raise $13B at a 20x revenue valuation is not the bubble indicator you think it is.

Wait a minute. Isn't this the very definition of a bubble?

replies(2): >>45111345 #>>45112551 #
458. masterjack ◴[] No.45110493[source]
I might agree if it were a 20% dilution round, but not if they are increasing from 3% to 7% dilution. Being so massively oversubscribed is a bullish sign; bad companies would be struggling to fill out their round.
replies(1): >>45111822 #
459. xnx ◴[] No.45110494{4}[source]
> The technology to build a human brain would cost billions in today’s dollars

I'm reminded of how insanely complex the human brain is: ~100 trillion connections. The Nvidia H100 has just 0.08 trillion transistors.

460. ◴[] No.45110501[source]
461. slashdave ◴[] No.45110507[source]
> Only searching for reasons why you are right is a fishing expedition.

Not to be mean, but aren't you being a little hypercritical here, bringing up your bespoke example of YouTube?

replies(2): >>45111064 #>>45111131 #
462. xnx ◴[] No.45110537{4}[source]
1 million gallons is approximately 0.5 seconds of flow of the Columbia river.
replies(1): >>45111427 #
463. protocolture ◴[] No.45110545[source]
>The compute moat is getting absolutely insane.

Is it?

Seems like there's a tiny performance gain between "This runs fine on my laptop" and "This required a $10B data centre".

I don't see any moat, just crazy investment hoping to crack the next thing and moat that.

464. 9cb14c1ec0 ◴[] No.45110567{5}[source]
That can only be true if someone else is subsidizing Anthropic's compute. The calculation is simple: annualized depreciation costs on the AI buildout (hundreds of billions, possibly a trillion invested) are more than the combined total annualized revenue of the inference industry. A more realistic computation of expenses would show each model line very deeply in the red.
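A back-of-envelope version, where every figure is an assumption picked only to show the shape of the argument, not reported data:

```python
# Back-of-envelope version of the claim above. Every figure is an assumption
# chosen only to illustrate the shape of the argument.
buildout_capex = 600e9     # assumed cumulative AI infrastructure spend, $
useful_life_years = 4      # assumed straight-line depreciation for GPUs/datacenters
inference_revenue = 30e9   # assumed combined annual inference revenue, $

annual_depreciation = buildout_capex / useful_life_years
print(f"annual depreciation:      ${annual_depreciation / 1e9:.0f}B")
print(f"annual inference revenue: ${inference_revenue / 1e9:.0f}B")
print(f"annual shortfall:         ${(annual_depreciation - inference_revenue) / 1e9:.0f}B")
```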
465. mothballed ◴[] No.45110619{6}[source]
The casino is an extremely rational savings model if you expect to constantly be robbed and want to convert small (and thus not worth robbing) income streams into occasionally large sums of money to be spent rapidly. I.e., say you are a North Korean worker in China/Russia and you occasionally get small change to spend on cigarettes; you could gamble it every 'paycheck' and eventually buy a phone to escape with the winnings.

Filipinos have a more predictable, low-loss version of this called Paluwagan.

replies(1): >>45112449 #
466. sixdimensional ◴[] No.45110669[source]
I'm not sure quantum computing is the solution, but it strikes me that a completely new compute paradigm like quantum computing is probably what is necessary - something orders of magnitude more efficient and powerful than today's binary compute.
467. htrp ◴[] No.45110695[source]
>Along with ICONIQ, the round was co-led by Fidelity Management & Research Company and Lightspeed Venture Partners.

Iconiq (Mark Zuckerberg's family office) was one of the lead investors in the round

468. throwaway98797 ◴[] No.45110704{5}[source]
lots of start ups were built on aws

i’d rather have a subscription than no service at all

oh, and one can always just not buy something if it’s not valuable enough

469. htrp ◴[] No.45110728[source]
Yes... tons
470. cjbgkagh ◴[] No.45110824[source]
That’s like being upset that you can’t dig your own Suez Canal.

So long as there is competition it’ll be available at marginal cost. And there is plenty of innovation that can be done on the edges, and not all of machine learning is LLMs.

replies(1): >>45112520 #
471. jryle70 ◴[] No.45110853{3}[source]
Why WeWork and not Alibaba?

Answer: It's easy to pick and choose to prove one's point.

Softbank has been doing well lately by the way:

https://www.ebc.com/forex/softbank-stock-price-jumps-13-afte...

replies(1): >>45113429 #
472. noosphr ◴[] No.45110882[source]
My hope is that this hype cycle overbuilds nuclear power capacity so much that we end up using it to sequester carbon dioxide from the atmosphere once the bubble pops and electricity prices become negative for most of the day.

In the medium term, China has so much spare capacity that they may be the only game in town for high-end models, while the US will be trying to fix a grid with 50 years of deferred maintenance.

473. noosphr ◴[] No.45110936{5}[source]
The corollary to Moore's law, that computers get twice as fast every 18 months, died by 2010. People who didn't live through the 80s, 90s, and early 00s, when you'd get a computer ten times as fast every 5 years, can't imagine what it was like back then.

Today the only way to scale compute is to throw more power at it or settle for the 5% per year real single core performance improvement.

474. HotHotLava ◴[] No.45110983{5}[source]
I'm pretty sure OP wasn't talking about the management hierarchy, but "from the top" in the sense that it was big established companies inventing the cloud and innovating and pushing in the space, not small startups.
replies(2): >>45111191 #>>45114565 #
475. xpe ◴[] No.45111064{3}[source]
I don’t interpret the above as a mean-spirited comment, but it does miss the point of the example I gave; namely, people second-guessing a market (or information heavily influenced by markets, like a new funding round) tend to lose. (Of course there are examples in the other direction, but they are less common and do not deserve equal emphasis.)

In general, a market synthesizes more information than any one individual, and when it operates well it is unlikely that an individual is going to beat it.

This is a well known general pattern, so if someone wants to argue in the other direction, they need to be ready to offer very strong evidence and reasoning why the market is wrong — and even when they do, they’re still probably going to be wrong.

476. xpe ◴[] No.45111131{3}[source]
I think you mean hypocritical.

To answer: no, and even if it was a “yes” it wouldn’t affect the argument I was making. I’ll explain.

I was wondering how long it would take for this kind of meta-critique to pop up. Meta critiques are interesting: some people use them as zingers, hoping to dismantle someone else’s entire position. But they almost never accomplish that because they are at a different level of argument: they aren’t engaging with the argument itself.

Meta-critiques are more like an argument against the person crafting the argument. In this sense, they function not unlike ad hominem attacks while sneakily remaining fair game.

Lastly, even if I was a hypocrite, it wouldn’t necessarily mean that I was wrong — it would simply make me inconsistent in the application of a principle.

replies(1): >>45112512 #
477. arduanika ◴[] No.45111143{5}[source]
Yes. Among other things, the Anthropic stake was sold, and repayments have begun. The full process will probably drag out through next year at least.

On the lawyer fees...do you expect someone to do it for free? Would you rather hire someone without the expertise? Do you think the crimes are the estate lawyers' fault?

Process costs money. You can sneer at the professionals that work hard to keep a lawful society in order, if you like. SBF sure did. Look where that got him.

478. xpe ◴[] No.45111151{3}[source]
You seem to think that one selected analogy proves that there’s no other explanation by which something is sensible?

Example: I could also make up dozens of reasons why something permitted by the laws of physics seems ridiculous.

479. 999900000999 ◴[] No.45111154{3}[source]
I'm going to need more than 30k to achieve a profit.

I'm basically hiring a part time front end junior assistant to fill in some gaps.

I'm hiring out of a cheaper county where 30k can actually do something. The idea is the first 30 gets an MVP done.

I don't think I could finish anything worth selling for that.

480. xpe ◴[] No.45111170{8}[source]
You misunderstand what I mean by “collective”. If you are charitable, the meaning is easy to see. “Collective” as I used it does not mean that everything or every person gets factored in.
481. xpe ◴[] No.45111178{8}[source]
Rather than getting sidetracked, I will repeat the central point and question to you again:

> How many people have more information than even an imperfect market-derived quantity?

I’ll restate the point because I don’t think you’re understanding what I mean.

Do you think this funding round was irrational from the point of view of the investors? If so, how can you make such a claim? Do you have information they do not?

It is possible you have some bit of knowledge they don’t, but on balance it is unlikely that you are operating from a position of having more relevant information.

482. pandemicsyn ◴[] No.45111191{6}[source]
Sure, Amazon was a big established co at the dawn of the cloud, and a little bit of an unexpected dark horse. None of the managed hosting providers saw Amazon coming. Also-rans like Rackspace were also pretty established by that point.

But there was also cool stuff happening at smaller places like Joyent, Heroku, Slicehost, Linode, Backblaze, iron.io, etc.

483. ddxv ◴[] No.45111199[source]
Good! I'm looking forward to continuing to use Claude's free tier and giving open source projects time to catch up.
484. xpe ◴[] No.45111240{3}[source]
I see what you are getting at, but it is important to understand the context for my example and the argument I’m making.

I’ve explained various points at length in other comments: (i) why I selected this example (simply to show that folk wisdom or common sense is less reliable than market-driven valuations) (ii) how a funding round is influenced by markets even though it isn’t directly driven by a classic full market mechanism.

Something I haven’t said yet would be a question: how can an outsider rigorously assess the error in a funding round or acquisition? To phrase the question a different way: what price or valuation would an oracle assign based on known information?

One might call this ex-ante rationality. Framing it this way helps remove hindsight bias; for example, a subsequent failure doesn’t necessarily mean it was mispriced at the time.

485. month13 ◴[] No.45111304{3}[source]
For those curious, I found this to be a very entertaining retelling of events from Nortel's perspective: https://www.youtube.com/watch?v=I6xwMIUPHss
486. xpe ◴[] No.45111314{3}[source]
It helps me to know that there are other people noticing this. Like Fox News, a lot of comments here probably make us dumber.
replies(1): >>45113608 #
487. lz400 ◴[] No.45111336[source]
That’s probably what the companies spending the money think: that they’re building a huge moat. There’s an alternative view. If there’s a bubble, and all these companies are spending these huge sums on something that ends up not returning that much on the investment, and the models plateau and eventually smaller, cheaper, self-runnable open source versions get 90% of the way there, what’s going to happen to that moat? And to the companies that overspent so much?

This article is a good example of the bear case https://www.honest-broker.com/p/is-the-bubble-bursting

488. mountainriver ◴[] No.45111335{3}[source]
From software codegen alone, it is beyond useful.
489. xpe ◴[] No.45111345{3}[source]
Do you think some arbitrary multiple defines a bubble?
490. Daz1 ◴[] No.45111380{5}[source]
>Because cloud monetization was awful

Citation needed

491. chermi ◴[] No.45111383{3}[source]
What's the counterfactual? Where would the world be today? Certainly the present is not an optimal allocation of resources; uncertainty and hysteresis make that impossible. But where do you think we'd be instead? Are you assuming all of those dollars would be going to research otherwise? They wouldn't; if not for the "AI"/LLM hype, research funding would be at 2017 +/- 25% levels. Also think of how many researchers are funded and PhDs are trained because of this awful LLM era. Certainly their skills transfer. (Not that brute forcing with shit tons of compute is standard "research funding".)

And for the record I really wish more money was being thrown outside of LLM.

492. wiredpancake ◴[] No.45111419{3}[source]
A lot, but maybe a lot less than you expect.

You'd be competing with ASIC miners, which are 100x more cost-effective per MH/s. You don't need 100,000 GB of VRAM when GPU mining, therefore it's wasted.

493. wiredpancake ◴[] No.45111427{5}[source]
It means nothing when most of the water is recycled anyway. It's not like the GPUs actually drink the stuff; the water just runs through heatsinks and is cycled around.
replies(1): >>45111698 #
494. aoeusnth1 ◴[] No.45111440{5}[source]
All of the top upvoted comments are calling or implying this is a bubble. It seems to me the majority loves to imagine themselves persecuted in their home turf.
495. StephenHerlihyy ◴[] No.45111602{8}[source]
What's fun is that I have had Anthropic's AI support give me blatantly false information. It tried to tell me that I could get a full year's worth of Claude Max for only $200. When I asked if that was true, it quickly backtracked and acknowledged its mistake. I figure someone more litigious will eventually try to capitalize.
replies(1): >>45112519 #
496. StephenHerlihyy ◴[] No.45111625{4}[source]
My understanding is that models are already merely a confederation of many smaller sub-models being used as "tools" to derive answers. I am surprised that it took us this long to solve the "AI + Microservices = GOLD!" equation.
497. BobbyTables2 ◴[] No.45111695[source]
Until one day an outsider finds a new approach for LLMs that vastly reduces the computational complexity.

And then we’ll realize we wasted an entire Apollo space program to build an over-complicated autocompleter.

498. xnx ◴[] No.45111698{6}[source]
That's true for closed-loop systems, but some data centers use evaporative cooling because it is more energy efficient.
499. BobbyTables2 ◴[] No.45111724{6}[source]
“Bobby, some things are like a tire fire: trying to put it out only makes it worse. You just gotta grab a beer and let it burn.”

– Hank Rutherford Hill

500. BobbyTables2 ◴[] No.45111730{4}[source]
Cloud is just “rent to own” without the “own” part.
501. 1oooqooq ◴[] No.45111747{4}[source]
hahahhahahahahahahahahhaha... oh wait, you're serious?
502. ChadNauseam ◴[] No.45111799{5}[source]
> The creation of wealth comes from trade, [...]

I'm not sure to what extent you meant this, but I don't know that I'd agree with it. Trade allows specialization which does increase wealth massively, no doubt. And because of how useful specialization is, all wealth creation involves trade somewhere. But specialization is just one component of wealth creation. It stands alongside labor, innovation, and probably others.

replies(1): >>45113053 #
503. asdffdasy ◴[] No.45111822{3}[source]
your crystal ball needs calibration. this round alone was 14pct (183/13)... so the dilution was likely over 20pct.
replies(1): >>45112010 #
504. jgraettinger1 ◴[] No.45111844{4}[source]
It still doesn't make sense. Cursor undoubtedly has smart engineers who could implement the Anthropic text editing tool interface in their IDE. Why not just do that for one of your most important LLM integrations?
replies(1): >>45118027 #
505. ath3nd ◴[] No.45111885[source]
> GPT-7 will need its own sovereign wealth fund

If the diminishing returns that we see now continue to prove true, GPT-6 will already not be financially viable, so I doubt there will be a GPT-7 that can live up to the big version bump.

Many folks already consider GPT-5 to be more like GPT-4.1. I personally am very bearish on Anthropic and OpenAI.

506. tootie ◴[] No.45111904[source]
This is why Nvidia is the most valuable company in the world. Ultimately all these investment rounds for LLM companies are just going to be spent on Nvidia products.
507. ath3nd ◴[] No.45111912{6}[source]
And yet we fail to see an uptick in better, higher-quality software; if anything, AI slop is making OSS owners reject AI PRs because of their low quality.

I'd wager the personal failure rate when using LLMs is probably even higher than the 95% in enterprise, but will wait to see the numbers.

508. tdhz77 ◴[] No.45111914[source]
I applied for some of your open positions. Looks like they should be funded now.
509. tootie ◴[] No.45111948[source]
Margins of 60%? On inference maybe but that disappears when you price in model training.

This guy's analysis says they are bleeding out despite massive revenue

https://www.wheresyoured.at/anthropic-is-bleeding-out/

replies(1): >>45112080 #
510. mikewarot ◴[] No.45111971[source]
Most of that power usage is moving data and weights into multiply accumulate hardware, then moving the data out. The actual computation is a fairly small fraction of the power consumed.

It's quite likely that an order of magnitude improvement can be had. This is an enormous incentive signal for someone to follow.

511. masterjack ◴[] No.45112010{4}[source]
13/183=0.071 so how can it be 20pct for this round??
replies(1): >>45121360 #
512. ipnon ◴[] No.45112046[source]
The greatest failure of American capitalism today is that meteoric companies take decades to trade on public markets instead of years. Only the already wealthy and connected are allowed to invest in these kinds of opportunities. In the 80s and 90s Anthropic would have been trading on NYSE already. Zoomers have it rough ...
513. 15155 ◴[] No.45112048{4}[source]
This is why chiplets are used.
514. ankit219 ◴[] No.45112080{3}[source]
In January, when DeepSeek launched, Dario Amodei had to disclose they spent about $10M to train the last generation of models (his argument was that DeepSeek was on the curve, not breaking it).

They earned $250M in May based on ARR, and about $400M in July. Model training is going to be amortized over multiple years anyway. I am not privy to how much they spent, so I'm not going to comment on that. GM was public news, and hence I used that figure.

Re Zitron's analysis, I don't find it reliable or compelling.

replies(2): >>45112776 #>>45112789 #
515. majormajor ◴[] No.45112144{5}[source]
Do they have a function to predict in advance if the next model is going to be profitable?

If not, this seems like a recipe for bankruptcy. You are always investing more than you're making, right up until the day you don't make it back. Whether that's next year or in ten or twenty years. It's basically impossible to do it forever - there simply isn't enough profit to be had in the world if you go forward enough orders of magnitude. How will they know when to hop off the train?

replies(1): >>45113228 #
516. alluro2 ◴[] No.45112150{5}[source]
I'm genuinely curious about how you and other people with similar outlook see this playing out, as it would kind of provide hope.

Scenario: You are a mid-level engineer who got laid off from a company betting on AI to replace a significant portion of its junior and mid-level developers. You were also employing a middle-aged woman to help with the kids after school and around the house until you and your wife came back from work. She now needs to be let go as well, as you can't afford her anymore. The same thing happened to a large portion of your peers, and work in the same industry/profession is practically no longer available. This has ripple effects on your local market (restaurants, cafes, clothing stores, etc.).

How do you see this as empowering and a net positive thing for these people individually, and for the society? What do they do that replaces their previous income and empowers them to get back to the same level at least?

replies(2): >>45112404 #>>45117734 #
517. ◴[] No.45112200{6}[source]
518. jayd16 ◴[] No.45112210{6}[source]
Can you explain to me where Anthropic (or its investors) expect to be making money if that's what it actually costs to run this stuff?
replies(1): >>45112785 #
519. oblio ◴[] No.45112270{5}[source]
> According to Dario, each model line has generally been profitable: i.e. $200MM to train a model that makes $1B in profit over its lifetime.

Surely the Anthropic CEO will have no incentive to lie.

replies(1): >>45112528 #
520. oblio ◴[] No.45112311{6}[source]
Because the competition hasn't gone out of business (at least outside the US where tons of other ride hailing apps are available in most major locales) and because 16 (SIXTEEN!!!) years after founding Uber is still net profit negative: over its lifetime it has lost more money than it made.

The only people that really benefited from Uber are:

- Uber executives

- early investors that saw the share price go up

- early customers that got VC subsidized rides

replies(1): >>45112728 #
521. oblio ◴[] No.45112318{6}[source]
Are there? Which ones? I'm especially interested in companies that weren't built to be sold.
522. petesergeant ◴[] No.45112321{3}[source]
gpt-oss-120b has cost OpenAI virtually all of the revenue they were getting from me, because I can pay Cerebras and Groq a fraction of what I was paying for o4-mini and get dramatically faster inference, for a model that passes my eval suite. This is to say, I think high-quality "open" models that are _good enough_ are a much bigger threat. Even more so since OpenRouter has essentially commoditized generation.

Each new commercial model needs to not just be better than the previous version, it needs to be significantly better than the SOTA open models for the bread-and-butter generation that I'm willing to pay the developer a premium to use their resources for generation.

523. barchar ◴[] No.45112404{6}[source]
Well, if everyone is unemployed there won't be much of a market for these newly AI enabled companies to sell into. Also, in the extreme, you'd have deflation such that it's worth hiring again. This would be very painful.

More likely automatic stabilizers and additional stimulative spending would have to happen in order to fully utilize all the new productive capacity (or reduce it, as people start to work less). It's politically hard to sustain double digit unemployment, and ultimately the government can always spend enough or cut enough taxes to get everyone employed or get enough people to leave the labor force.

524. oblio ◴[] No.45112406{3}[source]
24 months.
525. BrenBarn ◴[] No.45112438{4}[source]
The difference is once you bought one of those chips you could do your own innovation on top of it (i.e., with software) without further interference from those well-funded companies. You can't do that with GPT et al. because of the subscription model.
replies(1): >>45112812 #
526. delusional ◴[] No.45112441[source]
> The compute moat

Does this really describe a "moat", or are you just describing capital?

The capitalization is getting insane. We're basically at the point where you need more capital than a small nation's GDP.

That sounds much more accurate to my ears, and much more troubling.

527. barchar ◴[] No.45112449{7}[source]
It can also be a way to evade capital controls. You know you'll lose, but otoh you also know you'll probably not lose _everything_ and you buy in with RMB and cash out in HKD.
528. unethical_ban ◴[] No.45112483{4}[source]
Kill us off or else let us starve while they watch the world burn from their killer AI drone protected estates.
529. mlyle ◴[] No.45112507{3}[source]
They've gotta hope they get to cheap AGI, though.

Any stall in progress, either on chips or on smartness/FLOP, means there's a lot of surplus previous-generation gear that can hang around and commoditize it all out to open models.

Just like how the "dot com bust" brought about an ISP renaissance on all the surplus, cheap-but-slightly-off-leading-edge gear.

IMO that's the opportunity for a vibrant AI ecosystem.

Of course, if they get to cheap AGI, we're cooked: both from vendors having so much control and the destabilization that will come to labor markets, etc.

530. nmfisher ◴[] No.45112511[source]
Were there many competitors to YouTube though? I remember Vimeo (still around) and Google Video (replaced by YouTube), but not much else.

Between OpenAI, Anthropic, Google, Facebook, xAI, Microsoft, Mistral, Alibaba, DeepSeek, z.ai, Falcon, and many others, AI feels a lot more competitive.

531. slashdave ◴[] No.45112512{4}[source]
Oh, I see. Next time I make a mistake, I'll just skip the apology, and claim "I was inconsistent in applying a principle."

Yeah, I meant hypocritical. For some reason I couldn't find the right word.

replies(1): >>45120606 #
532. nielsbot ◴[] No.45112519{9}[source]
"Air Canada must honor refund policy invented by airline’s chatbot"

https://arstechnica.com/tech-policy/2024/02/air-canada-must-...

533. mlyle ◴[] No.45112520{3}[source]
> So long as there is competition it’ll be available at marginal cost.

Most things are not perfect competition, so you get MR=MC not P=MC.

We're talking about massive capital costs. Another name for massive capital costs are "barriers to entry".

replies(1): >>45116894 #
534. nielsbot ◴[] No.45112528{6}[source]
Not saying he's above lying, but I do believe there are potential legal ramifications from a CEO lying. (Assuming they get caught)
535. edg5000 ◴[] No.45112534{3}[source]
How can you dismiss the value of the tech so blatantly? Have you used Opus for general questions and coding?

> no idea whether it's actually benefiting someone "on the ground"

I really don't get it. Before, we were farmers plowing by hand, and now we are using tractors.

I do totally agree with your sentiment that it's still a horrible development though! Before Claude Code, I ran everything offline, all FOSS, owned all my machines, servers etc. Now I'm a subscription user. Zero control, zero privacy. That is the downside of it all.

Actually, it's just like the mechanisation of farming! Collectivization in some countries was a nightmare for small land owners who cultivated the land (probably with animals). They went from that to a more efficient, government controlled collective farm, where they were just a farm worker, with the land reclaimed through land reform. That was an upgrade for the efficiency of farming, needing fewer humans for it. But a huge downgrade for the individual small-scale land owners.

536. nielsbot ◴[] No.45112538{5}[source]
And don't forget the furnace's furnace: gas/coal to power all this.
537. 3uler ◴[] No.45112551{3}[source]
20x earnings is not that insane for a fast growing startup. Now Tesla and Palantir at 100x earnings is insane.
replies(3): >>45112763 #>>45112783 #>>45115482 #
538. up2isomorphism ◴[] No.45112552[source]
There are no generational differences between these models. I tested Cursor with all the different backends and they are similar in most cases. The so-called race is just a Wall Street sensation to bump the stock price.
539. arduanika ◴[] No.45112560[source]
Ha.

Do you own any Amazon, Alphabet, or Salesforce, perhaps through some index fund? Congratulations, you own some Anthropic. This matters to you.

And market conditions matter to you, too. Every deal is a comparable mark that factors into every other deal. Where this tech is going, and whether we're in a bubble or just getting started... these are forces that are interested in you, even if you're not interested in them.

540. arduanika ◴[] No.45112567[source]
There's a logical limit to this "private markets are the new public markets" crap. Once they finish Series Z, they have to IPO. It's the law.
541. 3uler ◴[] No.45112585{5}[source]
That is 99% of software engineering: boring line-of-business CRUD applications or data pipelines.

Most creativity is just doing some slightly different riff on something done before…

Sorry to break it to you but most of your job is just context engineering for yourself.

542. joshcsimmons ◴[] No.45112639[source]
What is the business model? Has anyone answered that yet? Because selling $20 subscriptions ain't it.
543. thoughtpeddler ◴[] No.45112696[source]
Isn't this the whole premise of existing companies like SF Compute? [0]

[0] https://sfcompute.com/

544. klausa ◴[] No.45112697{3}[source]
My gut feeling is that Claude Code being so popular is:

- 60% just having a much better UX and any amount of "taste", compared to Cursor

- 39.9% being able to subsidize the raw token costs compared to what's being billed to Cursor

- 0.1% some magical advantage from also training the model

Claude Code is just much _more pleasant_ to use than most other tools, and I think people are overly discounting that aspect of it.

I'd rather use CC with a slightly dumber model than Cursor with a slightly better one; and I suspect I'm far from being the only one.

545. simianwords ◴[] No.45112728{7}[source]
Are you predicting that they can't be net profitable?
replies(1): >>45113129 #
546. lelanthran ◴[] No.45112761{6}[source]
> You need a 100+gigs ram and a top of the line GPU to run legacy models at home. Maybe if you push it that setup will let you handle 2 people maybe 3 people.

This doesn't seem correct. I run legacy models with only slightly reduced performance on 32GB RAM with a 12GB VRAM GPU right now. BTW, that's not an expensive setup.

> You think anyone is going to make money on that vs $20 a month to anthropic?

Why does it have to be run as a profit-making machine for other users? It can run as a useful service for the entire household, when running at home. After all, we're not talking about specialised coding agents using this[1], just normal user requests.

====================================

[1] For an outlay of $1k for a new GPU I can run a reduced-performance coding LLM. Once again, when it's only myself using it, the economics work out. I don't need the agent to be fully autonomous because I'm not vibe coding - I can take the reduced-performance output, fix it and use it.

replies(2): >>45118444 #>>45122107 #
547. bigyabai ◴[] No.45112763{4}[source]
Tesla and Palantir are both propped up by the federal government, kinda poor examples.
replies(1): >>45112969 #
548. kgwgk ◴[] No.45112776{4}[source]
> Model training is going to be amortized over multiple years anyway.

Claude 4 launch was not even fifteen months after the launch of Claude 3 (which is discontinued). The “multiple” is 1.2 - I wouldn’t call that “multiple years”.

549. lelanthran ◴[] No.45112785{7}[source]
> Can you explain to me where Anthropic (or it's investors) expect to be making money if that's what it actually costs to run this stuff?

Not the GP (in fact I just replied to GP, disagreeing with them), but I think that economies of scale kick in when you are provisioning M GPUs for N users and both M and N are large.

When you are provisioning for N=1 (a single user), then M=1 is the minimum you need, which makes it very expensive per user. When N=5 and M is still 1, then the cost per user is roughly a fifth of the original single-user cost.
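
A back-of-the-envelope sketch of that batching effect (every number below is invented purely for illustration):

    # Hourly serving cost per user when M GPUs are shared across N concurrent users.
    GPU_COST_PER_HOUR = 3.0  # hypothetical all-in hourly cost of one GPU

    def cost_per_user(n_users: int, gpus_needed: int) -> float:
        return GPU_COST_PER_HOUR * gpus_needed / n_users

    print(cost_per_user(1, 1))     # 3.0  - a dedicated GPU for a single user
    print(cost_per_user(5, 1))     # 0.6  - the same GPU batching 5 users
    print(cost_per_user(500, 40))  # 0.24 - a large fleet with good utilisation

The economics only start to work once N is large enough to keep every GPU busy, which is exactly what a hosted provider has and a single-user setup doesn't.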

550. singron ◴[] No.45112789{4}[source]
It doesn't make sense to amortize model training over multiple years since they train multiple models per year (e.g. Claude 3.5, 3.7, and 4 were released within 12 months). Or you can, but then you have to overlap amortization schedules multiple times over. E.g. if they amortized over 24 months, then they would still be amortizing Claude 2.1, 3, 3.5, 3.7, 4, and 4.1.
551. almogo ◴[] No.45112812{5}[source]
Yes you can? Sure, you can't run GPT-5 locally, but get your hands on a proper GPU and you can still run some very sophisticated local inference.
replies(1): >>45122817 #
552. 3uler ◴[] No.45112969{5}[source]
Yeah maybe, but the point is there are plenty of public companies trading at a 20x earnings multiple
553. lelanthran ◴[] No.45113053{6}[source]
>> > The creation of wealth comes from trade, [...]

> I'm not sure to what extent you meant this, but I don't know that I'd agree with it.

At a very foundational level, all wealth comes from trade, even when there is no currency involved.

When two parties voluntarily make a trade, each party gets more value out of the trade than they had before, so the sum total of value after the trade is, by definition alone, greater than the sum total of value before the trade.

Small example: I offer to trade you a bag of potatoes for 2 hours of your time to fix my tractor, and you accept.

This trade only happens because:

1. I value a running tractor more than I value my bag of potatoes

2. You value a bag of potatoes more than you value 2 hours of your time.

After the trade is done, I have more value (running tractor) and you have more value (a bag of potatoes), hence the total value after the trade is more than the total value before the trade.

The only thing that creates value is trade. It's the source of value.

replies(2): >>45113646 #>>45114467 #
554. atleastoptimal ◴[] No.45113116{3}[source]
>Anyone who disagrees with me is a shill account
555. lelanthran ◴[] No.45113119{6}[source]
> The risk involved in many small 51% bets is very different from the risk in a single all-or-nothing 51% bet.

Right, but parent didn't say anything about an all-in bet, just double-or-nothing on a positive EV bet.

Frankly, I'd repeatedly bet on a positive EV bet too; it's a guaranteed win if you're allowed to go on for as long as you want to.

556. atleastoptimal ◴[] No.45113128{5}[source]
Is there an open source Uber? There are multiple open-source AI models far beyond what SOTA was just a year ago. Even if they don't manage to drive prices down on the most recent closed models, they themselves will never cost more than a trivial amount above the compute they run on, and compute will only get more expensive if demand for AI continues to grow exponentially, which would likewise drive prices down due to competitive pressure.
replies(1): >>45125891 #
557. oblio ◴[] No.45113129{8}[source]
No, I'm predicting that:

1. opportunity costs are a thing.

2. if you add up Uber's financial numbers since its creation, the crazy amount of VC money that was invested in Uber would have provided better returns invested in the S&P 500.

3. Uber will settle in as a boring, profitable company that's going to be a side note in the history of both tech and transportation, and will primarily be remembered for eroding worker rights.

replies(1): >>45113631 #
558. ikr678 ◴[] No.45113228{6}[source]
Back in my day, we called this a pyramid scheme.
559. aurareturn ◴[] No.45113352{3}[source]
I'm pretty sure Nemotron models are their internal teams dogfooding to learn more about the latest AI advancements in software.
560. utyop22 ◴[] No.45113429{4}[source]
Lol, you're missing the point.

The WeWork investment proved Son never has an investment thesis - other than spray and pray.

561. utyop22 ◴[] No.45113608{4}[source]
Yeah you have to be careful on the internet in general TBH.

This place is better than much of the internet, but still. Ah, the dream would be to have this place somehow filled with experts on all topics and let them duel it out.

562. simianwords ◴[] No.45113631{9}[source]
I don't get your point. You would still have made more money investing in Uber than in the S&P.
replies(1): >>45113986 #
563. utyop22 ◴[] No.45113646{7}[source]
Ultimately real wealth is all about enhancing the well-being of individuals in society by way of an exchange of assets.

All we have done is become more elaborate and sophisticated about this stuff, but at the core it's been the same throughout most of history.

564. utyop22 ◴[] No.45113709{3}[source]
Because there were external forces that helped propel and keep YT aloft. If smartphones and so on had not come into existence, it would have crashed and burned.

"Uber is similar - they shouldn't exist, they basically broke the law in most countries they moved into, brazenly violated all kinds of barriers that kept taxi industry completely entrenched for decades"

And here's a simple way to demonstrate my point: backed by VC, Uber accelerated its growth and got to the point where it was so widely adopted that nobody could stop it from operating.

565. utyop22 ◴[] No.45113732{3}[source]
"The point of these VC funds is to lose most of the time and win big very rarely."

Sure, but in my view, I think we are on the downtrend now and this line of thinking has been taken way too far.

566. ZephyrBlu ◴[] No.45113747{5}[source]
__350x-ing__, not 35
567. utyop22 ◴[] No.45113754{3}[source]
I agree to an extent. But this stuff is actually super hard to do. Humans tend not to like doing the hard stuff.
568. utyop22 ◴[] No.45113775[source]
Comparative advantage.
569. illiac786 ◴[] No.45113827[source]
I sincerely hope this whole LLM monetization scheme crashes and burns down on these companies.

I really hope we can get to a point where modest hardware will achieve similar results for most tasks and these insane amounts of hardware will only be required for the most complex requests, which will be rarer, thereby killing the business case.

I would dance the Schadenfreude Opus in C major if that became the case.

570. rich_sasha ◴[] No.45113853{4}[source]
All these businesses looked incredibly unsustainable for a long time. Uber was a cash shredder. Amazon didn't turn a profit for years, IIRC. They became profitable essentially by becoming quasi-monopolies.

Indeed, LLM companies likely turn operating profits, but I'm not sure that alone justifies their valuations. It's one thing to make money, it's another to make a return for investors.

And sure, valuations are growing faster than you can blink. Time will tell whether this in turn is justifiable or a bubble.

replies(1): >>45116618 #
571. ricardobayes ◴[] No.45113886{7}[source]
It's an interesting case. IMO LLMs are not a product in the classical sense, companies like Anthropic are basically doing "basic research" so others can build products on top of it. Perhaps Anthropic will charge a royalty on the API usage. I personally don't think you can earn billions selling $500 subscriptions. This has been shown by the SaaS industry. But it is yet to be seen whether the wider industry will accept such royalty model. It would be akin to Kodak charging filmmakers based on the success of the movie. Somehow AI companies will need to build a monetization pipeline that will earn them a small amount of money "with every gulp", if we are using a soft drink analogy.
572. utyop22 ◴[] No.45113888{4}[source]
Is it though? It's only sustainable to the extent that there is easy access to funding on favourable terms...
573. utyop22 ◴[] No.45113911{4}[source]
Yes indeed, if we look at it through this equation:

FCFF = EBIT(1-t) - Reinvestment

If the hardware needs constant replacement, that Reinvestment number will always remain higher than most people think.

In fact, it seems none of these investments are fixed. Therefore there are no economies of scale (as it stands right now).
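
A toy illustration of that point (every figure below is invented):

    # FCFF = EBIT(1-t) - Reinvestment, with heavy recurring hardware replacement.
    def fcff(ebit: float, tax_rate: float, reinvestment: float) -> float:
        return ebit * (1 - tax_rate) - reinvestment

    # A firm earning $1B of EBIT that must replace $1.5B of hardware every year
    # stays free-cash-flow negative no matter how good the operating margin looks.
    print(fcff(ebit=1_000, tax_rate=0.21, reinvestment=1_500))  # -710.0 (in $MM)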

574. HellDunkel ◴[] No.45113939{4}[source]
You completely forgot about the invention of the home computer. If we had all been logging into some mainframe computer from a home terminal, your assessment would be correct.
575. Printerisreal ◴[] No.45113966{11}[source]
they lie about real inflation, everywhere
replies(1): >>45117844 #
576. oblio ◴[] No.45113986{10}[source]
No, you wouldn't have, unless you were one of a handful of VCs or Uber execs (ok, and a bunch of pre-IPO Uber employees).

Uber IPO May 2019: market cap $82bn. Uber now: $193bn. 2.35x multiplier.

S&P 500 May 2019: $2750. S&P 500 now: $6460. 2.35x multiplier.

So the much, much riskier Uber investment has barely matched a passive S&P 500 investment over the same time frame. And the business itself has lost money, more money was put into it than has been gotten back so far.

I'm not even sure why I'm in this conversation as it seems ideological. I bring up facts and you bring up... vibes?

replies(1): >>45114264 #
577. conartist6 ◴[] No.45114113{3}[source]
Come to the counterrevolution; we have cookies : )
578. simianwords ◴[] No.45114264{11}[source]
Let me get this straight.

I was replying to this: "So far Lyft seems to be doing okay, which proves the business plan doesn't really work" when I said that Uber is profitable.

Your retort to that was that the S&P grew more than Uber, which is a nonsensical argument. Is our standard for what counts as a good business whether it grows faster than the S&P after going public?

Edit: I dug up some research related to this: most companies do worse than the S&P after going public. What's your point then?

579. elbear ◴[] No.45114467{7}[source]
There are people who trade at a deficit just because they believe that's the only viable option. They see commerce as a zero-sum game.
replies(1): >>45115356 #
580. simgt ◴[] No.45114488{3}[source]
Yeah. Electricity production that we also need for electrifying all of our transport and industry because of climate change. Good luck getting that cheaper.

That said, innovation on the model side is more likely to come from a $10B-funded startup that still has some money to spare on the brightest researchers, on top of giving them all the data and compute they want to play with.

replies(1): >>45126361 #
581. euLh7SM5HDFY ◴[] No.45114515[source]
The bubble will only burst when the market runs out of money or new model releases provide little to no improvement. Or someone actually creates a real AGI. No matter how low that chance is, the FOMO among investors must be crazy.
582. acdha ◴[] No.45114565{6}[source]
That could be, I was definitely thinking of management hierarchy since that difference has been so striking with AI.

A lot of my awareness started in the academic HPC world, which was a bit ahead in needing high capacity of generic resources, but it felt like this came from the edges rather than the major IT giants. Companies like IBM, Microsoft, or HP weren't doing it, and some companies like Oracle or Cisco appeared to think that infrastructure complexity was part of their lock on enterprise IT departments, since places with complex hand-run runbooks weren't quick to switch vendors.

Amazon at the time wasn’t seen as a big tech company - they were where you bought CDs – and companies like Joyent or Rackspace had a lot of mindshare as well before AWS started offering virtual compute in 2006. One big factor in all of this was that x86 virtualization wasn’t cheap until the mid-to-late 2000s so a lot of people weren’t willing to pay high virtualization costs, but without that you’re talking services like Bingodisk or S3 rather than companies migrating compute loads.

583. fancyfredbot ◴[] No.45114647{5}[source]
Agree - as I said it's a bad investment with hindsight.

If you don't have hindsight then passing on FTX probably implies passing on some successful opportunities too. So another opportunity cost and possibly a larger one.

584. AbstractH24 ◴[] No.45115181[source]
I’m not sure if this is impressive or terrifying.

Impressive in the valuation, terrifying in the fact that they need to keep raising and these valuations might not prove justifiable

585. AbstractH24 ◴[] No.45115194[source]
Where are we in that cycle though? How close to the top?
586. lelanthran ◴[] No.45115356{8}[source]
> There are people who trade at a deficit just because they believe that's the only viable option.

It's not a deficit if the value they assign to what they get is higher than the value they assign to what they give.

If they are giving away something they value highly for something they value less highly, then it's not a voluntary trade, now is it?

587. rsynnott ◴[] No.45115482{4}[source]
20x _earnings_ wouldn’t be, but that’s not what we’re talking about.
588. rsynnott ◴[] No.45115538{5}[source]
Probably not a total confidence; it’s EA/rationalist theory taken to ludicrous extremes in both cases.
replies(1): >>45117620 #
589. Razengan ◴[] No.45116002{4}[source]
It's literally a scene from The Matrix.
replies(1): >>45119768 #
590. aqme28 ◴[] No.45116228{5}[source]
No, I'm calling out the person who is comparing those things.
591. DSingularity ◴[] No.45116416{4}[source]
What hardware do you use?
592. cruffle_duffle ◴[] No.45116524{3}[source]
Their employers are paying that money. The jury is still out on how wisely that money is being spent.
593. tim333 ◴[] No.45116529{5}[source]
I'm not sure many investors are investing their own money. They are investing other people's money, maybe owned by shareholders of large companies in turn owned by our pension funds.
replies(1): >>45117057 #
594. filoleg ◴[] No.45116618{5}[source]
Cannot speak for the rest, but the whole "Amazon didn't turn a profit for years" argument (as if their profitability now comes solely through quasi-monopoly routes) is incredibly misleading and bordering on disingenuous.

Since before AWS was even a thing, Amazon was already generating great revenue and could've easily just stopped expanding and investing in growth, and it would have been profitable. Instead, Amazon decided to reinvest all its potential profits into growth/expansion (with the favorable tax treatment on top) at the expense of keeping the cash profits. At any given point, Amazon could've stopped reinvesting all potential profits into growth and would have been instantly profitable.

This is not the same as Uber, which ran its core service operations at a net loss (and was only cheap due to its investors eating the difference and hoping that Uber would eventually figure out how to not lose money on operating its core service).

replies(1): >>45117308 #
595. utyop22 ◴[] No.45116883{3}[source]
Maybe in enterprise.

But in the consumer market segment, for most cases, it's all about who is cheapest (preferably free) - aside from the few loonies who care about personality.

The true lasting economic benefits within enterprise are yet to play out. The trade-off between faster code production and more poorly maintained code also has yet to play out.

replies(1): >>45120232 #
596. cjbgkagh ◴[] No.45116894{4}[source]
Granted that capital costs are a barrier to entry and that barriers to entry lead to non-perfect competition, but the exploitability is limited in the case of LLMs because they exist on a sub-linear utility scale. In LLMs, 2x the price is not 2x as useful, which means a new entrant can enter the lower end of the market and work their way up. The only way to prevent that is for the incumbent to keep prices as close to marginal cost as possible.

There is a natural monopoly aspect, given the ability to train and data mine on private usage data, but in general improvements in the algorithms and training seem to be dominating advancements. Microsoft's search engine Bing paid an absolute fortune for access to usage data and was unable to capitalize on it. LLMs have the unusual property that a lot of value can be extracted from fine-tuning for specialized purposes, which opens the door to a million little niches, providing fertile ground for future competitors. This is one area where being a fast follower makes a lot of sense.

replies(1): >>45118323 #
597. sdesol ◴[] No.45117057{6}[source]
It might not be their money, but they are paid a management fee and if they cannot provide some return, people will stop using them.
replies(1): >>45125965 #
598. rich_sasha ◴[] No.45117308{6}[source]
OK, we can debate Uber, but your take on Amazon describes today's LLM providers pretty well too. They also make good revenue on their product, but put so much cash into growth that they at least appear to be running at a loss.
599. rsynnott ◴[] No.45117620{6}[source]
*coincidence
replies(1): >>45117976 #
600. Eiriksmal ◴[] No.45117717{3}[source]
https://www.sec.gov/enforcement-litigation/litigation-releas...

A fascinating investor. I just finished re-reading Microserfs. The buzzwords may have changed between 1993 and 2025, but the human behaviors certainly have not.

601. harmmonica ◴[] No.45117734{6}[source]
I totally share your concern, but I think there's reason for hope, assuming it's not Terminator-style AGI that destroys the world (bigger problems than unemployment in that case). Specific to your scenario, it seems like companies are laying people off today in the name of AI efficiency gains (that in itself is debatable, but let's assume that is why they're doing it--they think they can do the same if not more with less). But if you play those gains out, companies that are in growth mode ought to be able to use them to accelerate product development. So instead of laying people off, companies will be able to build product that much faster because their employees, and engineers in particular, can move so much more quickly. We're so early, though, and c-suite folks are so myopic that the troops haven't yet had time to show them that revenue growth is the real prize of AI/LLMs (and believe me, it's always some of the troops that show them the way).

On a larger level, I would just ask your fictitious mid-level engineer: what are they able to do today, with an AI/LLM, that they were unable to do before? As a very basic example, and one that is already true with existing LLMs, a mid-level engineer who wanted to build an app might've formerly struggled with building a UI for it. Now, sans designer, a mid-level engineer can spin up an app UI much more quickly, and without the labor of finding and actually paying a designer. That's not to say there's no value left in design, but if you're starting out it's similar to how Bootstrap (dating myself here) was an enabler because you no longer needed a designer to build a website (it was still a huge time suck and pain in the ass though). You can multiply that across a bunch of roles and tasks today, because LLMs make it possible to do things you formerly just wouldn't have been able to do on your own.

Last thing is the much more high level. Every time some new tech is introduced there's a lot of concern about displacement. I think, again, that's valid and perhaps moreso with AI. But it does seem to me like major new tech always seems to create a lot of opportunity. It might not be for the exact same people like your mid-level engineer (although I think it might for him/her), but I stay hopeful that the amount of opportunity created will offset the amount of suffering it will cause. And I don't say that in some kind of "suffering is ok" way, but just like revenue growth is the be all end all for so many companies, tech brings change and some suffering is a part of that. Prior skills become less important, new skills are preferred. Some folks adapt. Others thrive. Some are left behind.

If you're still checking in on this thread, and you actually read my diatribe, do you think I'm totally full of it? Again, I don't know that I would bet it would work out this way. Actually I probably would bet on that. But I'm definitely hopeful it will.

602. arcticbull ◴[] No.45117844{12}[source]
They really don't. You can run the numbers yourself. All the prices that it's computed from are public, the methodology is public and it's dead easy to backtest. Dead. Easy. (1 + (Inflation / 100)) ^ (Years). Inflation would be the dumbest possible thing to lie about because it's so damn easy to check.

The conversation always goes like this.

You: "The government is lying about inflation!"

Me: "Ok, what rate do you think it's actually been?"

You: "10%!"

Me: "So you're telling me inflation over the last 30 years was 1700%? So prices are now 17X higher than in 1995? You sure?"

Then we look up historical prices like this.

https://www.tasteofhome.com/collection/this-is-what-grocerie...

In 1995 ground beef was $1.49/lb.

Bread was $.89/loaf.

Eggs were $0.92/doz.

Milk was $2.50/gal.

idk if you're shopping at Erewhon but where I shop ground beef isn't $25/lb, bread isn't $15/loaf, eggs, well, you got me there lol, and milk isn't $42.50/gal.

Unless the conspiracy is far bigger than we think, or "they" are everywhere, whoever "they" are, I think it's safe to assume that inflation numbers have been pretty accurate.
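
If you want to run that backtest yourself it's a couple of lines (the 10% figure is just the hypothetical "real" rate from the exchange above):

    # Compound a claimed annual rate over 30 years and see what 1995 prices would imply.
    prices_1995 = {"ground beef /lb": 1.49, "bread /loaf": 0.89,
                   "eggs /doz": 0.92, "milk /gal": 2.50}
    claimed_rate, years = 0.10, 30

    factor = (1 + claimed_rate) ** years  # ~17.4x
    for item, price in prices_1995.items():
        print(f"{item}: ${price:.2f} in 1995 -> implies ${price * factor:.2f} today")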

603. arduanika ◴[] No.45117976{7}[source]
Yeah, not a total coincidence of thinking style. I just don't think it's likely that SBF was literally thinking about Roko's post as he did the crimes.
604. mritchie712 ◴[] No.45118027{5}[source]
I agree it doesn't make sense. I'd think they could alias their own tools to match Anthropic's, but my guess is they don't want to customize too heavily on any given model.
605. arcticbull ◴[] No.45118284{10}[source]
No, it doesn't. Deficit spending does not create new money. Deficit spending borrows existing money from people in the economy who already have it, and gives it out in exchange for a share of future revenues. The Fed does not participate in Treasury primary auctions and does not monetize the debt as a means of funding government operations.

Think about it this way: if money were just created to fund the deficit why would we have a debt? That's double-counting. You can invalidate your hypothesis very easily: the M2 money supply is about half the size of the debt. It's not possible to square that circle unless deficit spending was re-pledging existing money.

606. mlyle ◴[] No.45118323{5}[source]
Almost anything has a utility scale which is diminishing. But we still see MR=MC pricing in industries with barriers to entry (IPR, capital costs). TSMC and Mercedes don't price cheap to avoid giving others a toehold.

> There is a natural monopoly aspect given the ability to train and data mine on private usage data but in general improvements in the algorithms and training seem to be dominating advancements.

There are pretty big economies of scale with inference - the magic of routing requests across experts and batching them while keeping latency low. It's an expensive technology to create, and there's a large minimum scale at which it works well.

replies(1): >>45124171 #
607. jayd16 ◴[] No.45118444{7}[source]
Plus, when you're hosting it yourself, you can be reckless with what you feed it. Pricing in the privacy gain, it seems like self hosting would be worth the effort/cost.
608. Centigonal ◴[] No.45119488[source]
The human brain also doesn't take 6 months to train to a highly productive level. There is a level of time-compression happening here.
609. psychoslave ◴[] No.45119768{5}[source]
Yes it is. We can also maybe agree that the comment wasn't implying otherwise?

I mean, it's like the djinn giving you three wishes, and not a single character will ask "what are the two best wishes I can make to (ensure mankind reaches the best perpetually peaceful, harmonious, flourishing social dynamics forever | whatever goal the character might have as their greatest hope)". When you have an instant perfect knowledge-acquisition machine at your disposal, the obvious first thing to figure out is what the most important things to do are to reach your goal.

The film didn't mention everything Neo learned like that, though, just that he kept accumulating knowledge for many hours straight. It wouldn't be an action movie otherwise, but you would certainly hope the character's first words after such an impressive feat wouldn't be "I know kung fu".

610. xpe ◴[] No.45120232{4}[source]
> But in the consumer market segment, for most cases, its all about who is cheapest (free preferably) - aside from the few loonies who care about personality.

On what basis do you know this? Or more like your personal impression — based on asking how many people? Your friends?

replies(1): >>45120455 #
611. chermi ◴[] No.45120332{4}[source]
Ignoring energy costs(!), I'm interested in the following. Say every server generation from nvda is 25% "better at training", by whatever metric (1). Could you not theoretically wire together 1.25 + delta more of the previous generation to get the same compute? The delta accounts for latency/bandwidth from interconnects. I'm guessing delta is fairly large given my impression of how important HBM and networking are.

I don't know the efficiency gains per generation, but let's just say to get the same compute with this 1.25+delta system requires 2x energy. My impression is that while energy is a substantial cost, the total cost for a training run is still dominated by the actual hardware+infrastructure.

It seems like there must be some break even point where you could use older generation servers and come out ahead. Probably everyone has this figured out and consequently the resale value of previous gen chips is quite high?
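
Roughly the comparison I have in mind, with every number made up (plug in real prices, power draws and speedups to get an actual answer):

    # Effective cost per hour of a reference workload: amortized hardware plus power,
    # divided by relative throughput. The old gear is treated as already paid for.
    HOURS_PER_YEAR = 24 * 365

    def hourly_cost(chip_price, amort_years, power_kw, usd_per_kwh, rel_speed):
        capex = chip_price / (amort_years * HOURS_PER_YEAR)
        opex = power_kw * usd_per_kwh
        return (capex + opex) / rel_speed

    old_gen = hourly_cost(0, 4, 1.0, 0.08, 1.0)       # sunk cost, slower, hungrier
    new_gen = hourly_cost(30_000, 4, 0.7, 0.08, 2.0)  # new purchase, faster, leaner
    print(f"old: ${old_gen:.3f}/unit-hour, new: ${new_gen:.3f}/unit-hour")

With these particular made-up numbers the already-paid-for old gear wins, which is part of why I'd expect previous-gen resale value to stay high; the real answer obviously hinges on actual power prices and the true generation-over-generation speedup.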

What's the lifespan at full load of these servers? I think I read CoreWeave depreciates them (somewhat controversially) over 4 years.

Assuming the chips last long enough, even if they're not usable for LLM training/serving inference, can't they be reused for scientific loads? I'm not exactly old, but back in my PhD days people were building our own little GPU clusters for MD simulations. I don't think long MD simulations are the best use of compute these days, but there's many similar problems like weather modeling, high dimensional optimization problems, materials/radiation studies, and generic simulations like FEA or simply large systems of ODEs.

Are these big clusters being turned into hand-me-downs for other scientific/engineering problems like above, or do they simply burn them out? What's a realistic expected lifespan for a B200? Or maybe it's as simple as they immediately turn their last gen servers over to serve inference?

Lot of questions, but my main question is just how much the hardware is devalued once it becomes previous gen. Any guidance/references appreciated!

Also, anyone still in the academic computing world, do people like D. E. Shaw still exist trying to run massive MD simulations or similar? Do the big national computing centers use the latest, greatest big Nvidia AI servers or something a little more modest? Or maybe even they're still just massive CPU servers?

While I have anyone who might know, whatever happened to that fad from 10+ years ago saying a lot of compute/algorithms would be shifting toward more memory-heavy models(2). Seems like it kind of happened in AI at least.

(1) Yes I know it's complicated, especially with memory stuff.

(2) I wanna say it was ibm Almaden championing the idea.

replies(1): >>45121676 #
612. marstall ◴[] No.45120413[source]
As part of a longer conversation, I asked ChatGPT when GPT-5-level capabilities will be cheap enough to include in gift cards, throwaway plastic toys, etc. Answer was 2030-35. https://chatgpt.com/share/68b8ac92-ad28-8008-b2f4-5b1d777558... ... the conversation went on to envision a future where trillions of discarded full-GPT-5 chips litter landfills and fungi learn to power them up and incorporate their knowledge into their biome, but you can just ignore that part
613. xpe ◴[] No.45120431{8}[source]
>>>> different commenter above: It is all fake and made up, and the numbers are detached from the real world, but it's not like the market doesn't know that.

>>> me: Perhaps there are salient differences between art on a wall and a company.

>> you: At heart, not really. The whole point of all of this is to motivate humans to get off their butt and reduce entropy.

> me: A painting on a wall is merely an inanimate object. / A company has agency; it seeks to add economic value to itself over time including changing people’s perceptions.

The Horror! Just look at the disjointed conversational history above. It seems like some sort of drunken history episode where people aren’t paying attention to each other.

Should I assume you are trying to understand what I’m saying? It is becoming less plausible with every comment. (I’m referring to the “be charitable” part of HN guidelines.)

Additionally, there is another anti-pattern at work here: this seems like a pretty inane definitional argument. You’re claiming there’s no difference between art on a wall and a corporation entity? By what definition? What is the utility of your definition; meaning, what can you do with your definition that provides differential predictive power?

My claim: when it comes to valuation, an agent is sufficiently different from a non-agent (yes, even if it appreciates!). What is the criterion for "sufficiently different"? To explain: if you get more benefit out of a distinction than it costs you to make the distinction, it is a net benefit.

In this case about valuing things, someone who makes a living building predictive valuation models is going to distinguish wall art from corporate entities because doing so is useful for prediction.

Of course they have some things in common. This is irrelevant to the question of "is making this distinction worth it?" As long as predicting the difference between them is valuable, paying attention to the distinction is valuable.

This kind of talking past each other is one of many reasons “why we can’t have nice things” such as useful discussion. Shameful.

If you propose some grand unified theory that says two things ultimately derive from the same thing, that’s fine, but if you’re going to use it for prediction you’ll have to explain how to apply it.

614. utyop22 ◴[] No.45120455{5}[source]
I have a wide-ranging sample of folks I've spoken to and whose usage I've observed.
615. mvdtnz ◴[] No.45120606{5}[source]
Just so you know, you are arguing with LLM output, not a human.
616. 1oooqooq ◴[] No.45121360{5}[source]
Can't see your point. This is already close to .1 so it can only go up. Since it's series F it would very likely be at .2, as the parent said, but I would guess it's close to .5 or more by now. Not counting all they had to dole out to employees thanks to fb.
617. SchemaLoad ◴[] No.45121676{5}[source]
I'm not the one building out datacenters, but I believe power consumption is the reason for the devaluation. It's the same reason we saw bitcoin miners throw all their ASICs in the bin every 6 months: at some point it becomes cheaper to buy new hardware than to keep running the old inefficient chips, when the power savings of the new chips exceed their purchase price.

These AI data centers are chewing up unimaginable amounts of power, so if Nvidia releases a new chip that does the same work at half the power consumption, that whole datacenter of GPUs is massively devalued.

The whole AI industry is looking like there won't be a first-mover advantage, and if anything there will be a late-mover advantage when you can buy the better chips and skip burning money on the old generations.

618. tick_tock_tick ◴[] No.45122107{7}[source]
Just your GPU, not counting the rest of the system, costs 4 years of subscription, and with the sub you get the new models, which your existing hardware will likely not be able to run at all.

It's closer to $3k to build a machine that you can reasonably use, which is 12 whole years of subscription. It's not hard to see why no one is doing it.

replies(1): >>45124259 #
619. BrenBarn ◴[] No.45122817{6}[source]
You can do some, but many of them have license restrictions that prevent you from using them in certain ways. I can buy an Intel chip and deliberately use it to do things that hurt Intel's business (e.g., start a competing company). The big AI companies are trying very hard to make that kind of thing impossible by imposing constraints on the allowed uses of their models.
620. cjbgkagh ◴[] No.45124171{6}[source]
I’m unconvinced that the lessons learned from scaling will constitute much of a moat. There is certainly an incentive for incumbents to give such an impression.
621. lelanthran ◴[] No.45124259{8}[source]
> Just your GPU not counting the rest of the system costs 4 years of subscription

With my existing setup for non-coding tasks (GPU is a 3060 12GB which I bought prior to wanting local LLM inference, but use it now for that purpose anyway) the GPU alone was a once-off ~$350 cost (https://www.newegg.com/gigabyte-windforce-oc-gv-n3060wf2oc-1...).

It gives me literally unlimited requests, not pseudo-unlimited as I get from ChatGPT, Claude and Gemini.
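
Back-of-the-envelope for that single-user case (the lifetime, usage and electricity figures below are my guesses):

    # Monthly cost of the local setup vs a $20/month subscription.
    GPU_PRICE = 350        # the 3060 12GB mentioned above
    LIFETIME_MONTHS = 48   # assume ~4 years of useful life
    POWER_KW = 0.17        # rough draw under inference load
    HOURS_PER_DAY = 3      # assumed daily usage
    USD_PER_KWH = 0.15

    monthly = GPU_PRICE / LIFETIME_MONTHS + POWER_KW * HOURS_PER_DAY * 30 * USD_PER_KWH
    print(f"~${monthly:.2f}/month of local capacity vs $20/month per subscription seat")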

> and with the sub you get the new models where your existing hardware will likely not be able to run it at all.

I'm not sure about that. Why wouldn't new LLM models run on a 4-year-old GPU? Wasn't a primary selling point of the newer models that "they use less computation for inference"?

Now, of course there are limitations, but for non-coding usage (of which there is a lot) this cheap setup appears to be fine.

> It's closer to $3k to build a machine that you can reasonable use which is 12 whole years of subscription. It's not hard to see why no one is doing it.

But there are people doing it. Lots, actually, and not just for research purposes. With the costs apparently still falling, with each passing month it gets more viable to self-host, not less.

The calculus looks even better when you have a small group (say 3 - 5 developers) needing inference for an agent; then you can get a 5060 Ti with 16GB RAM for slightly over $1000. The limited RAM means it won't perform as well, but at that performance the agent will still be capable of writing 90% of boilerplate, making edits, etc.

These companies (Anthropic, OpenAI, etc) are at the bottom of the value chain, because they are selling tokens, not solutions. When you can generate your own tokens continuously 24x7, does it matter if you generate at half the speed?

replies(1): >>45124355 #
622. tick_tock_tick ◴[] No.45124355{9}[source]
> does it matter if you generate at half the speed?

Yes, massively. It's not even linear: 1/2 speed is probably 1/8 or less of the value of "full speed". It's going to be even more pronounced as "full speed" gets faster.

replies(1): >>45125212 #
623. lelanthran ◴[] No.45125212{10}[source]
> Yes, massively it's not even linear 1/2 speed is probably 1/8 or less the value of "full speed". It's going to be even more pronounced as "full speed" gets faster.

I don't think that's true for most use-cases (content generation, including artwork, code/software, reading material, summarising, etc). Something that takes a day without an LLM might take only 30m with GPT5 (artwork), or maybe one hour with Claude Code.

Does the user really care that their full-day artwork task is now one hour and not 30m? Or that their full-day coding task is now only two hours, and not one hour?

After all, from day one of the ChatGPT release, literally no one complained that it was too slow (and it was much slower than it is now).

Right now no one is asking for faster token generation, everyone is asking for more accurate solutions, even at the expense of speed.

624. reissbaker ◴[] No.45125350{8}[source]
Ed Zitron plainly has no idea what he's talking about. For example:

> Putting aside the hype and bluster, OpenAI — as with all generative AI model developers — loses money on every single prompt and output. Its products do not scale like traditional software, in that the more users it gets, the more expensive its services are to run because its models are so compute-intensive.

While OpenAI's numbers aren't public, this seems very unlikely. Given open-source models can be profitably run for cents per million input tokens at FP8 — and OpenAI is already training (and thus certainly running) in FP4 — even if the closed-source models are many times bigger than the largest open-source models, OpenAI is still making money hand over fist on inference. The GPT-5 API costs $1.25/million input tokens: that's a lot more than it takes in compute to run it. And unless you're using the API, it's incredibly unlikely you're burning through millions of tokens in a week... And yet, subscribers to the chat UI are paying $20/month (at minimum!), which is much higher than a few million tokens a week cost.
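
A rough sanity check of that margin argument; the serving-cost and usage figures below are my assumptions, not OpenAI's numbers:

    # Assumed serving cost for a heavy chat-UI user vs the $20/month plan
    # (well below the $1.25/M the API charges for input, per the pricing above).
    assumed_cost_per_m = 0.10     # hypothetical compute cost per million tokens
    tokens_per_month = 2_000_000  # a generous guess for a heavy chat user

    serving_cost = tokens_per_month / 1_000_000 * assumed_cost_per_m
    print(f"~${serving_cost:.2f}/month of assumed compute against a $20 subscription")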

Ed Zitron repeats his claim many, many, excruciatingly many times throughout the article, and it seems quite central to the point he's trying to make. But he's wrong, and wrong enough that I think you should doubt that he knows much about what he's talking about.

(His entire blog seems to be a series of anti-tech screeds, so in general I'm pretty dubious he has deep insight into much of anything in the industry. But he quite obviously doesn't know about the economics of LLM inference.)

625. ◴[] No.45125529[source]
626. xigoi ◴[] No.45125891{6}[source]
> There are multiple open source AI models far beyond what SOTA was just 1 year ago.

There are many models that call themselves open source, but the source is nowhere to be found, only the weights.

627. tim333 ◴[] No.45125965{7}[source]
The kind of thing that happens is Joe Bloggs runs the Fidelity Hot Tech fund, up 50% over the last three years. Then when it crashes that's closed and Joe is switched to the Fidelity Safe Income fund with no down years for the last five years.
628. ryukoposting ◴[] No.45126361{4}[source]
I was thinking about this last night, and I find it amusing. Imagine investing in a company because it has money... your money. That you invested.