
LLM Inevitabilism

(tomrenner.com)
1612 points SwoopsFromAbove | 164 comments
lsy ◴[] No.44568114[source]
I think two things can be true simultaneously:

1. LLMs are a new technology and it's hard to put the genie back in the bottle with that. It's difficult to imagine a future where they don't continue to exist in some form, with all the timesaving benefits and social issues that come with them.

2. Almost three years in, companies investing in LLMs have not yet discovered a business model that justifies the massive expenditure of training and hosting them, the majority of consumer usage is at the free tier, the industry is seeing the first signs of pulling back investments, and model capabilities are plateauing at a level where most people agree that the output is trite and unpleasant to consume.

There are many technologies that have seemed inevitable and seen retreats under the lack of commensurate business return (the supersonic jetliner), and several that seemed poised to displace both old tech and labor but have settled into specific use cases (the microwave oven). Given the lack of a sufficiently profitable business model, it feels as likely as not that LLMs settle somewhere a little less remarkable, and hopefully less annoying, than today's almost universally disliked attempts to cram it everywhere.

replies(26): >>44568145 #>>44568416 #>>44568799 #>>44569151 #>>44569734 #>>44570520 #>>44570663 #>>44570711 #>>44570870 #>>44571050 #>>44571189 #>>44571513 #>>44571570 #>>44572142 #>>44572326 #>>44572360 #>>44572627 #>>44572898 #>>44573137 #>>44573370 #>>44573406 #>>44574774 #>>44575820 #>>44577486 #>>44577751 #>>44577911 #
1. alonsonic ◴[] No.44570711[source]
I'm confused by your second point. LLM companies are not making any money from current models? OpenAI generates $10B ARR and has 100M MAUs. Yes, they are running at a loss right now, but that's because they are racing to improve models. If they stopped today to focus on optimizing their current models to minimize operating cost and monetizing their massive user base, you think they don't have a successful business model? People use these tools daily; this is inevitable.
replies(11): >>44570725 #>>44570756 #>>44570760 #>>44570772 #>>44570780 #>>44570853 #>>44570896 #>>44570964 #>>44571007 #>>44571541 #>>44571655 #
2. airstrike ◴[] No.44570725[source]
No, because if they stop to focus on optimizing and minimizing operating costs, the next competitor over will leapfrog them with a better model in 6-12 months, making all those margin improvements an NPV negative endeavor.
3. bbor ◴[] No.44570756[source]
It’s just the natural counterpart to dogmatic inevitabilism — dogmatic denialism. One denies the present, the other the (recent) past. It’s honestly an understandable PoV though when you consider A) most people understand “AI” and “chatbot” to be synonyms, and B) the blockchain hype cycle(s) bred some deep cynicism about software innovation.

Funny seeing that comment on this post in particular, tho. When OP says “I’m not sure it’s a world I want”, I really don’t think they’re thinking about corporate revenue opportunities… More like Rehoboam, if not Skynet.

replies(1): >>44570924 #
4. mc32 ◴[] No.44570760[source]
Making money and operating at a loss contradict each other. Maybe someday they'll make money, but not just yet. As many have said, they're hoping that capturing market share will position them nicely once things settle. Obviously we're not there yet.
replies(1): >>44570844 #
5. BolexNOLA ◴[] No.44570772[source]
> that's because they are racing to improve models. If they stopped today to focus on optimization of their current models to minimize operating cost and monetizing their user base you think they don't have a successful business model?

I imagine they would’ve flicked that switch if they thought it would generate a profit, but as it is it seems like all AI companies are still happy to burn investor money trying to improve their models while I guess waiting for everyone else to stop first.

I also imagine it’s hard to go to investors with “while all of our competitors are improving their models and either closing the gap or surpassing us, we’re just going to stabilize and see if people will pay for our current product.”

replies(1): >>44574022 #
6. ◴[] No.44570780[source]
7. colinmorelli ◴[] No.44570844[source]
It is absolutely possible for the unit economics of a product to be profitable and for the parent company to be losing money. In fact, it's extremely common when the company is bullish on their own future and thus they invest heavily in marketing and R&D to continue their growth. This is what I understood GP to mean.

Whether it's true for any of the mainstream LLM companies or not is anyone's guess, since their financials are either private or don't separate out LLM inference as a line item.

8. lordnacho ◴[] No.44570853[source]
Are you saying they'd be profitable if they didn't pour all the winnings into research?

From where I'm standing, the models are useful as is. If Claude stopped improving today, I would still find use for it. Well worth 4 figures a year IMO.

replies(5): >>44570918 #>>44570925 #>>44570962 #>>44571742 #>>44572421 #
9. dvfjsdhgfv ◴[] No.44570896[source]
> If they stopped today to focus on optimization of their current models to minimize operating cost and monetizing their user base you think they don't have a successful business model?

Actually, I'd be very curious to know this. Because we already have a few relatively capable models that I can run on my MBP with 128 GB of RAM (and a few less capable models I can run much faster on my 5090).

In order to break even they would have to minimize the operating costs (by throttling, maiming models etc.) and/or increase prices. This would be the reality check.

But the cynic in me feels they prefer to avoid this reality check and use the tried and tested Uber model of permanent money influx with the "profitability is just around the corner" justification but at an even bigger scale.

replies(1): >>44570940 #
10. dvfjsdhgfv ◴[] No.44570918[source]
For me, if Anthropic stopped now, and given access to all alternative models, they would still be worth exactly the $240 I'm paying now. I guess Anthropic and OpenAI can see the real demand clearly from their free:basic:expensive plan ratios.
replies(1): >>44574494 #
11. dvfjsdhgfv ◴[] No.44570924[source]
> most people understand “AI” and “chatbot” to be synonyms

This might be true (or not), but for sure not on this site.

replies(1): >>44571283 #
12. apwell23 ◴[] No.44570925[source]
> Well worth 4 figures a year IMO

only because software engineering pay hasn't adjusted down for the new reality. You don't know what it's worth yet.

replies(2): >>44571084 #>>44574128 #
13. ghc ◴[] No.44570940[source]
> In order to break even they would have to minimize the operating costs (by throttling, maiming models etc.) and/or increase prices. This would be the reality check.

Is that true? Are they operating inference at a loss or are they incurring losses entirely on R&D? I guess we'll probably never know, but I wouldn't take as a given that inference is operating at a loss.

I found this: https://semianalysis.com/2023/02/09/the-inference-cost-of-se...

which estimates that it costs $250M/year to operate ChatGPT. If even remotely true, $10B in revenue on $250M of COGS would be a great business.
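For what it's worth, the arithmetic (a back-of-the-envelope sketch; both inputs are the rough outside estimates quoted above, not confirmed figures):

```python
# Gross-margin sketch using the figures quoted above:
# ~$10B revenue vs. ~$250M/year inference COGS (both rough estimates).
revenue = 10_000_000_000
cogs = 250_000_000
gross_margin = (revenue - cogs) / revenue
print(f"Gross margin: {gross_margin:.1%}")  # prints: Gross margin: 97.5%
```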

replies(1): >>44571028 #
14. jsnell ◴[] No.44570962[source]
They'd be profitable if they showed ads to their free tier users. They wouldn't even need to be particularly competent at targeting or aggressive with the amount of ads they show, they'd be profitable with 1/10th the ARPU of Meta or Google.

And they would not be incompetent at targeting. If they were to use the chat history for targeting, they might have the most valuable ad targeting data sets ever built.

replies(5): >>44571061 #>>44571136 #>>44572280 #>>44572443 #>>44573390 #
15. dbalatero ◴[] No.44570964[source]
They might generate 10b ARR, but they lose a lot more than that. Their paid users are a fraction of the free riders.

https://www.wheresyoured.at/openai-is-a-systemic-risk-to-the...

replies(3): >>44571830 #>>44572286 #>>44573506 #
16. ehutch79 ◴[] No.44571007[source]
Revenue is _NOT_ Profit
replies(2): >>44571163 #>>44571412 #
17. dvfjsdhgfv ◴[] No.44571028{3}[source]
As you say, we will never know, but this article[0] claims:

> The cost of the compute to train models alone ($3 billion) obliterates the entirety of its subscription revenue, and the compute from running models ($2 billion) takes the rest, and then some. It doesn’t just cost more to run OpenAI than it makes — it costs the company a billion dollars more than the entirety of its revenue to run the software it sells before any other costs.

[0] https://www.lesswrong.com/posts/CCQsQnCMWhJcCFY9x/openai-los...

replies(2): >>44571100 #>>44571236 #
18. bugbuddy ◴[] No.44571061{3}[source]
I heard the majority of users are techies asking coding questions. What do you sell to someone asking how to fix a nested for loop in C++? I am genuinely curious. Programmers are known to be the stingiest consumers out there.
replies(9): >>44571134 #>>44571182 #>>44571264 #>>44571269 #>>44572071 #>>44572254 #>>44572375 #>>44572688 #>>44573270 #
19. fkyoureadthedoc ◴[] No.44571084{3}[source]
Can you explain this in more detail? The idiot bottom rate contractors that come through my team on the regular have not been helped at all by LLMs. The competent people do get a productivity boost though.

The only way I see compensation "adjusting" because of LLMs would need them to become significantly more competent and autonomous.

replies(2): >>44571579 #>>44573279 #
20. ghc ◴[] No.44571100{4}[source]
Obviously you don't need to train new models to operate existing ones.

I think I trust the semianalysis estimate ($250M) more than this estimate ($2B), but who knows? I do see my revenue estimate was for this year, though. However, $4B revenue on $250M COGS... is still staggeringly good. No wonder Amazon, Google, and Microsoft are tripping over themselves to offer these models for a fee.

replies(3): >>44571326 #>>44572365 #>>44575101 #
21. LtWorf ◴[] No.44571134{4}[source]
According to fb's aggressively targeted marketing, you sell them donald trump propaganda.
replies(1): >>44571275 #
22. lxgr ◴[] No.44571136{3}[source]
Bolting banner ads onto a technology that can organically weave any concept into a trusted conversation would be incredibly crude.
replies(4): >>44571218 #>>44571487 #>>44572061 #>>44572225 #
23. throwawayoldie ◴[] No.44571163[source]
And ARR is not revenue. It's "annualized recurring revenue": take one month's worth of revenue, multiply it by 12--and you get to pick which month makes the figures look most impressive.
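The month-picking effect is easy to see with made-up numbers (a minimal sketch; the monthly figures are hypothetical):

```python
# ARR as described above: take one month's recurring revenue, multiply by 12.
# Monthly figures below are invented for illustration.
monthly_revenue = {"Jan": 600_000, "Feb": 650_000, "Mar": 900_000}

def arr(month: str) -> int:
    return monthly_revenue[month] * 12

print(arr("Jan"))  # 7200000
print(arr("Mar"))  # 10800000 -- same company, 50% "bigger" by picking March
```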
replies(4): >>44571287 #>>44571311 #>>44571351 #>>44572679 #
24. cuchoi ◴[] No.44571182{4}[source]
I'm not sure that stereotype holds up. Developers spend a lot: courses, cloud services, APIs, plugins, even fancy keyboards.

A quick search shows that click on ads targeting developers are expensive.

Also there is a ton of users asking to rewrite emails, create business plans, translate, etc.

25. nacnud ◴[] No.44571218{4}[source]
True - but if you erode that trust then your users may go elsewhere. If you keep the ads visually separated, there's a respected boundary & users may accept it.
replies(2): >>44571708 #>>44572379 #
26. matwood ◴[] No.44571236{4}[source]
CapEx vs. OpEx.

If they stop training today what happens? Does training always have to be at these same levels or will it level off? Is training fixed? IE, you can add 10x the subs and training costs stay static.

IMO, there is a great business in there, but the market will likely shrink to ~2 players. ChatGPT has a huge lead and is already the Kleenex/Google of LLMs. I think the battle is really for second place, and that is likely dictated by who runs out of runway first. I would say that Google has the inside track, but they are so bad at product they may fumble. Makes me wonder sometimes how Google ever became a product and a verb.

replies(1): >>44572608 #
27. disgruntledphd2 ◴[] No.44571264{4}[source]
You'd probably do brand marketing for Stripe, Datadog, Kafka, Elastic Search etc.

You could even loudly proclaim that the ads are not targeted at users, which HN would love (but really it would just be old-school brand marketing).

28. Lewton ◴[] No.44571269{4}[source]
> I heard majority of the users are techies asking coding questions.

Citation needed? I can't sit on a bus without spotting some young person using ChatGPT

29. disgruntledphd2 ◴[] No.44571275{5}[source]
It's very important to note that advertisers set the parameters in which FB/Google's algorithms and systems operate. If you're 25-55 in a red state, it seems likely that you'll see a bunch of that information (even if FB are well aware you won't click).
replies(1): >>44573190 #
30. bbor ◴[] No.44571283{3}[source]
I mean...

  LLMs have not yet discovered a business model that justifies the massive expenditure of training and hosting them,
The only way one could say such a thing is if they think chatbots are the only real application.
31. UK-Al05 ◴[] No.44571287{3}[source]
That's still not profit.
replies(1): >>44571443 #
32. jdiff ◴[] No.44571311{3}[source]
Astonishing that that concept survived getting laughed out of the room long enough to actually become established as a term and an acronym.
replies(3): >>44571567 #>>44571756 #>>44572575 #
33. hamburga ◴[] No.44571326{5}[source]
But assuming no new models are trained, this competitive effect drives down the profit margin on the current SOTA models to zero.
replies(1): >>44571604 #
34. airstrike ◴[] No.44571351{3}[source]
You don't get to pick the month. At least not with any half-serious audience.
replies(2): >>44571452 #>>44572405 #
35. vuggamie ◴[] No.44571412[source]
It's a good point. Any business can get revenue by selling twenty-dollar bills for $19. But in the history of tech, many winners have been dismissed for lack of an apparent business model. Amazon went years losing money, and when the business stabilized, went years re-investing and never showed a profit. Analysts complained as Amazon expanded into non-retail activities. And then there's Uber.

The money is there. Investors believe this is the next big thing, and is a once in a lifetime opportunity. Bigger than the social media boom which made a bunch of billionaires, bigger than the dot com boom, bigger maybe than the invention of the microchip itself.

It's going to be years before any of these companies care about profit. Ad revenue is unlikely to fund the engineering and research they need. So the only question is, does the investor money dry up? I don't think so. Investor money will be chasing AGI until we get it or there's another AI winter.

36. throwawayoldie ◴[] No.44571443{4}[source]
I know. It's a doubly-dubious figure.
37. throwawayoldie ◴[] No.44571452{4}[source]
We're not talking about a half-serious audience: we're talking about the collection of reposters of press releases we call "the media".
38. Analemma_ ◴[] No.44571487{4}[source]
Like that’s ever stopped the adtech industry before.

It would be a hilarious outcome though, “we built machine gods, and the main thing we use them for is to make people click ads.” What a perfect Silicon Valley apotheosis.

39. 827a ◴[] No.44571541[source]
One thing we're seeing in the software engineering agent space right now is how many people are angry with Cursor [1], and now Claude Code [2] (just picked a couple examples; you can browse around these subreddits and see tons of complaints).

What's happening here is pretty clear to me: Its a form of enshittification. These companies are struggling to find a price point that supports both broad market adoption ($20? $30?) and the intelligence/scale to deliver good results ($200? $300?). So, they're nerfing cheap plans, prioritizing expensive ones, and pissing off customers in the process. Cursor even had to apologize for it [3].

There's a broad sense in the LLM industry right now that if we can't get to "it" (AGI, etc.) by the end of this decade, it won't happen during this "AI Summer". The reason for that is two-fold: intelligence scaling is logarithmic w.r.t. compute, and we simply cannot scale compute quickly enough. Meanwhile, interest in funding that exponential compute need will dry up, and previous super-cycles tell us that happens on the order of ~5 years.

So here's my thesis: We have a deadline that even evangelists agree is a deadline. I would argue that we're further along in this supercycle than many people realize, because these companies have already reached the early enshittification phase for some niche use-cases (software development). We're also seeing Grok 4 Heavy release with a 50% price increase ($300/mo) yet offer single-digit percent improvement in capability. This is hallmark enshittification.

Enshittification is the final, terminal phase of hyperscale technology companies. Companies can remain in that phase potentially forever, but it's not a phase where significant research, innovation, and optimization happen; instead, it is a phase of extraction. AI hyperscalers genuinely speedran this cycle thanks to their incredible funding and costs, but they're now showcasing very early signals of enshittification.

(Google might actually escape this enshittification supercycle, to be clear, and that's why I'm so bullish on them and them alone. Their deep, multi-decade investment in TPUs, cloud infra, and high-margin product deployments of AI might help them escape it.)

[1] https://www.reddit.com/r/cursor/comments/1m0i6o3/cursor_qual...

[2] https://www.reddit.com/r/ClaudeAI/comments/1lzuy0j/claude_co...

[3] https://techcrunch.com/2025/07/07/cursor-apologizes-for-uncl...

40. eddythompson80 ◴[] No.44571567{4}[source]
It’s a KPI just like any KPI and it’s gamed. A lot of random financial metrics are like that. They were invented or coined as a short hand for something.

Different investors use different ratios and numbers (ARR, P/E, EV/EBITDA, etc.) as a quick initial screen. They mean different things in different industries during different stages of a business's lifecycle. BUT they are supposed to help you get a starting point to reduce noise. Not to serve as the one metric you base your investing strategy on.

replies(1): >>44572237 #
41. lelanthran ◴[] No.44571579{4}[source]
> Can you explain this in more detail?

Not sure what GP meant specifically, but to me, if $200/m gets you a decent programmer, then $200/m is the new going rate for a programmer.

Sure, now it's all fun and games as the market hasn't adjusted yet, but if it really is true that for $200/m you can 10x your revenue, it's still only going to be true until the market adjusts!

> The competent people do get a productivity boost though.

And they are not likely to remain competent if they are all doing 80% review, 15% prompting and 5% coding. If they keep the ratios at, for example, 25% review, 5% prompting and the rest coding, then sure, they'll remain productive.

OTOH, the pipeline for juniors now seems to be irrevocably broken: the only way forward is to improve the LLM coding capabilities to the point that, when the current crop of knowledgeable people have retired, programmers are not required.

Otherwise, when the current crop of coders who have the experience retires, there'll be no experience in the pipeline to take their place.

If the new norm is "$200/m gets you a programmer", then that is exactly the labour rate for programming: $200/m. These were previously (at least) $5k/m jobs. They are now $200/m jobs.

replies(2): >>44571897 #>>44573056 #
42. ghc ◴[] No.44571604{6}[source]
Even if the profit margin is driven to zero, that does not mean competitors will cease to offer the models. It just means the models will be bundled with other services. Case in point: Subversion & Git drove VCS margin to zero (remember BitKeeper?), but Bitbucket and Github wound up becoming good businesses. I think Claude Code might be the start of how companies evolve here.
43. dkdbejwi383 ◴[] No.44571655[source]
How many of those MAUs are crappy startups building a janky layer on top of the OpenAI API which will cease to exist in 2 years?
replies(1): >>44575036 #
44. calvinmorrison ◴[] No.44571708{5}[source]
Google did it. LLMs are the new Google search. It'll happen sooner or later.
replies(1): >>44572195 #
45. vikramkr ◴[] No.44571742[source]
That's calculating value against not having LLMs and current competitors. If they stopped improving but their competitors didn't, then the question would be the incremental cost of Claude (financial, adjusted for switching costs, etc) against the incremental advantage against the next best competitor that did continue improving. Lock in is going to be hard to accomplish around a product that has success defined by its generalizability and adaptability.

Basically, they can stop investing in research either when 1) the tech matures and everyone is out of ideas or 2) they have monopoly power from either market power or oracle style enterprise lock in or something. Otherwise they'll fall behind and you won't have any reason to pay for it anymore. Fun thing about "perfect" competition is that everyone competes their profits to zero

46. singron ◴[] No.44571756{4}[source]
So the "multiply by 12" thing is a slight corruption of ARR, which should be based on recurring revenue (i.e. subscriptions). Subscriptions are harder to game by e.g. channel-stuffing and should be much more stable than non-recurring revenue.

To steelman the original concept, annual revenue isn't a great measure for a young fast-growing company since you are averaging all the months of the last year, many of which aren't indicative of the trajectory of the company. E.g. if a company only had revenue the last 3 months, annual revenue is a bad measure. So you use MRR to get a better notion of instantaneous revenue, but you need to annualize it to make it a useful comparison (e.g. to compute a P/E ratio), so you use ARR.

Private investors will of course demand more detailed numbers like churn and an exact breakdown of "recurring" revenue. The real issue is that these aren't public companies, and so they have no obligation to report anything to the public, and their PR team carefully selects a couple nice sounding numbers.

47. Cthulhu_ ◴[] No.44571830[source]
That's fixable; a gradual adjustment of the free tier will happen soon enough once they stop pumping money into it. Part of this is also a war of attrition, though: who has the most money to keep a free tier the longest and attract the most people. Very familiar strategy for companies trying to gain market share.
replies(4): >>44572182 #>>44572199 #>>44572277 #>>44572372 #
48. fkyoureadthedoc ◴[] No.44571897{5}[source]
$200 does not get you a decent programmer though. It needs constant prompting, babysitting, feedback, iteration. It's just a tool. It massively boosts productivity in many cases, yes. But it doesn't do your job for you. And I'm very bullish on LLM assisted coding when compared to most of HN.

High level languages also massively boosted productivity, but we didn't see salaries collapse from that.

> And they are not likely to remain competent if they are all doing 80% review, 15% prompting and 5% coding.

I've been doing 80% review and design for years, it's called not being a mid or junior level developer.

> OTOH, the pipeline for juniors now seems to be irrevocably broken

I constantly get junior developers handed to me from "strategic partners", they are just disguised as senior developers. I'm telling you brother, the LLMs aren't helping these guys do the job. I've let go 3 of them in July alone.

replies(3): >>44572544 #>>44572766 #>>44574269 #
49. evilfred ◴[] No.44572061{4}[source]
how is it "trusted" when it just makes things up
replies(3): >>44572191 #>>44572215 #>>44572641 #
50. jsnell ◴[] No.44572071{4}[source]
OpenAI has half a billion active users.

You don't need every individual request to be profitable, just the aggregate. If you're doing a Google search for, like, the std::vector API reference you won't see ads. And that's probably true for something like 90% of the searches. Those searches have no commercial value, and serving results is just a cost of doing business.

By serving those unmonetizable queries the search engine is making a bet that when you need to buy a new washing machine, need a personal injury lawyer, or are researching that holiday trip to Istanbul, you'll also do those highly commercial and monetizable searches with the same search engine.

Chatbots should have exactly the same dynamics as search engines.

51. sc68cal ◴[] No.44572182{3}[source]
That assumes that everyone is willing to pay for it. I don't think that's an assumption that will be true.
replies(3): >>44572633 #>>44572986 #>>44573012 #
52. andrewflnr ◴[] No.44572191{5}[source]
That's a great question to ask the people who seem to trust them implicitly.
replies(1): >>44572373 #
53. ptero ◴[] No.44572195{6}[source]
Yes, but for a while google was head and shoulders above the competition. It also poured a ton of money into building non-search functionality (email, maps, etc.). And had a highly visible and, for a while, internally respected "don't be evil" corporate motto.

All of which made it much less likely that users would bolt in response to each real monetization step. This is very different to the current situation, where we have a shifting landscape with several AI companies, each with its strengths. Things can change, but it takes time for 1-2 leaders to consolidate and for the competition to die off. My 2c.

54. gmerc ◴[] No.44572199{3}[source]
Competition is almost guaranteed to drive price close to cost of delivery especially if they can't pay trump to ban open source, particularly chinese. With no ability to play the thiel monopoly playbook, their investors would never make their money back if not for government capture and sweet sweet taxpayer military contracts.
replies(1): >>44573094 #
55. dingnuts ◴[] No.44572215{5}[source]
15% of people aren't smart enough to read and follow directions explaining how to fold a trifold brochure, place it in an envelope, seal it, and address it

you think those people don't believe the magic computer when it talks?

56. ModernMech ◴[] No.44572225{4}[source]
I imagine they would be more like product placements in film and TV than banner ads. Just casually dropping a recommendation and link to Brand (TM) in a query. Like those Cerveza Cristal ads in Star Wars. They'll make it seem completely seamless to the original query.
replies(2): >>44573078 #>>44573705 #
57. jdiff ◴[] No.44572237{5}[source]
I understand the importance of having data, and that any measurement can be gamed, but this one seems so tailor-made for gaming that I struggle to understand how it was ever a good metric.

Even being generous it seems like it'd be too noisy to even assist in informing a good decision. Don't the overwhelmingly vast majority of businesses see periodic ebbs and flows over the course of a year?

replies(1): >>44577153 #
58. yamazakiwi ◴[] No.44572254{4}[source]
A lot of people use it for cooking and other categories as well.

Techies are also great for network growth and verification for other users, and act as community managers indirectly.

59. kelseyfrog ◴[] No.44572277{3}[source]
Absolutely, free-tier AI won’t stay "free" forever. It’s only a matter of time before advertisers start paying to have their products woven into your AI conversations. It’ll creep in quietly—maybe a helpful brand suggestion, a recommended product "just for you," or a well-timed promo in a tangential conversation. Soon enough though, you’ll wonder if your LLM genuinely likes that brand of shoes, or if it's just doing its job.

But hey, why not get ahead of the curve? With BrightlyAI™, you get powerful conversational intelligence - always on, always free. Whether you're searching for new gear, planning your next trip, or just craving dinner ideas, BrightlyAI™ brings you personalized suggestions from our curated partners—so you save time, money, and effort.

Enjoy smarter conversations, seamless offers, and a world of possibilities—powered by BrightlyAI™: "Illuminate your day. Conversation, curated."

60. naravara ◴[] No.44572280{3}[source]
If interactions with your AI start sounding like your conversation partner shilling hot cocoa powder at nobody in particular those conversations are going to stop being trusted real quick. (Pop culture reference: https://youtu.be/MzKSQrhX7BM?si=piAkfkwuorldn3sb)

Which may be for the best, because people shouldn’t be implicitly trusting the bullshit engine.

61. Centigonal ◴[] No.44572286[source]
This echoes a lot of the rhetoric around "but how will facebook/twitter/etc make money?" back in the mid 2000s. LLMs might shake out differently from the social web, but I don't think that speculating about the flexibility of demand curves is a particularly useful exercise in an industry where the marginal cost of inference capacity is measured in microcents per token. Plus, the question at hand is "will LLMs be relevant?" and not "will LLMs be massively profitable to model providers?"
replies(12): >>44572513 #>>44572558 #>>44572586 #>>44572813 #>>44573104 #>>44573394 #>>44573558 #>>44573961 #>>44575180 #>>44575826 #>>44577467 #>>44577474 #
62. singron ◴[] No.44572365{5}[source]
You need to train new models to advance the knowledge cutoff. You don't necessarily need to R&D new architectures, and maybe you can infuse a model with new knowledge without completely training from scratch, but if you do nothing the model will become obsolete.

Also, the semianalysis estimate is from Feb 2023, which is before the release of GPT-4, and it assumes 13 million DAU. ChatGPT has 800 million WAU, so that's somewhere between 115 million and 800 million DAU. E.g. if we prorate the COGS estimate for 200 million DAU, that's 15x higher, or $3.75B.
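Spelled out, that proration looks like this (all inputs are the rough figures above; linear cost scaling with DAU is an assumption):

```python
# Scale the Feb-2023 COGS estimate (~$250M/yr at ~13M DAU) to 200M DAU,
# assuming inference cost grows linearly with daily active users.
base_cogs = 250_000_000      # $/year, semianalysis estimate
base_dau = 13_000_000
scaled_dau = 200_000_000

scale = scaled_dau / base_dau        # ~15.4x
scaled_cogs = base_cogs * scale
print(f"{scale:.1f}x -> ${scaled_cogs / 1e9:.2f}B/yr")  # prints: 15.4x -> $3.85B/yr
```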

replies(1): >>44573670 #
63. SJC_Hacker ◴[] No.44572372{3}[source]
I agree, it's easily fixable by injecting ads into the responses for the free tier, and probably eventually even the lower paid tiers to some extent.
replies(1): >>44572524 #
64. handfuloflight ◴[] No.44572373{6}[source]
They aren't trusted in a vacuum. They're trusted when grounded in sources and their claims can be traced to sources. And more specifically, they're trusted to accurately represent the sources.
replies(3): >>44572554 #>>44572818 #>>44574260 #
65. naravara ◴[] No.44572375{4}[source]
The existence of the LLMs will themselves change the profile and proclivities of people we consider “programmers” in the same way the app-driven tech boom did. Programmers who came up in the early days are different from ones who came up in the days of the web are different from ones who came up in the app era.
66. SJC_Hacker ◴[] No.44572379{5}[source]
There will be a respected boundary for a time; then, as advertisers find it's more effective, the boundaries will start to disappear.
67. SJC_Hacker ◴[] No.44572405{4}[source]
> At least not with any half-serious audience.

So I guess this rules out most SV venture capital

68. miki123211 ◴[] No.44572421[source]
But if Claude stopped pouring their money into research and others didn't, Claude wouldn't be useful a year from now, as you could get a better model for the same price.

This is why AI companies must lose money short term. The moment improvements plateau or the economic environment changes, everyone will cut back on research.

69. miki123211 ◴[] No.44572443{3}[source]
And they wouldn't even have to make the model say the ads. I think that's a terrible idea that would drive model performance down.

Traditional banner ads, inserted inline into the conversation based on some classifier seem a far better idea.

70. amrocha ◴[] No.44572513{3}[source]
The point is that if they’re not profitable they won’t be relevant since they’re so expensive to run.

And there was never any question as to how social media would make money, everyone knew it would be ads. LLMs can’t do ads without compromising the product.

replies(9): >>44572606 #>>44572617 #>>44572620 #>>44572951 #>>44573061 #>>44573125 #>>44575104 #>>44575452 #>>44576838 #
71. amrocha ◴[] No.44572524{4}[source]
Literally nobody would talk to a robot that spits back ads at them
replies(4): >>44572645 #>>44573292 #>>44574101 #>>44575780 #
72. handfuloflight ◴[] No.44572544{6}[source]
> It needs constant prompting, babysitting, feedback, iteration.

What do you think a product manager is doing?

replies(1): >>44572886 #
73. andrewflnr ◴[] No.44572554{7}[source]
Nope, lots of idiots just take them at face value. You're still describing what rational people do, not what all actual people do.
replies(1): >>44572590 #
74. ◴[] No.44572558{3}[source]
75. marcosdumay ◴[] No.44572575{4}[source]
Just wait until companies start calculating it on future revenue from people on the trial period of subscriptions... I mean, if we aren't there already.

Any number for which there isn't a law telling companies how to calculate it will always be a joke.

76. magicalist ◴[] No.44572586{3}[source]
> LLMs might shake out differently from the social web, but I don't think that speculating about the flexibility of demand curves is a particularly useful exercise in an industry where the marginal cost of inference capacity is measured in microcents per token

That we might come to companies saying "it's not worth continuing research or training new models" seems to reinforce the OP's point, not contradict it.

replies(1): >>44572756 #
77. handfuloflight ◴[] No.44572590{8}[source]
Fair enough.
78. tsukikage ◴[] No.44572606{4}[source]
You’re not thinking evil enough. LLMs have the potential to be much more insidious about whatever it is they are shilling. Our dystopian future will feature plausibly deniable priming.
79. marcosdumay ◴[] No.44572608{5}[source]
That paragraph is quite clear.

OpEx is larger than revenue. CapEx is also larger than the total revenue over the lifetime of a model.

80. kridsdale3 ◴[] No.44572617{4}[source]
Well, they haven't really tried yet.

The Meta app Threads had no ads for the first year, and it was wonderful. Now it does, and its attractiveness was only reduced by 1% at most. Meta is really good at knowing the balance for how much to degrade UX by having monetization. And the amount they put in is hyper profitable.

So let's see Gemini and GPT with 1% of response content being sponsored. I doubt we'll see a user exodus and if that's enough to sustain the business, we're all good.

81. Centigonal ◴[] No.44572620{4}[source]
I can run an LLM on my RTX3090 that is at least as useful to me in my daily life as an AAA game that would otherwise justify the cost of the hardware. This is today, which I suspect is in the upper part of the Kuznets curve for AI inference tech. I don't see a future where LLMs are too expensive to run (at least for some subset of valuable use cases) as likely.
replies(1): >>44573316 #
82. mike-cardwell ◴[] No.44572633{4}[source]
Those that aren't willing to pay for it directly, can still use it for free, but will just have to tolerate product placement.
83. tsukikage ◴[] No.44572641{5}[source]
“trusted” in computer science does not mean what it means in ordinary speech. It is what you call things you have no choice but to trust, regardless of whether that trust is deserved or not.
replies(2): >>44573318 #>>44573568 #
84. kridsdale3 ◴[] No.44572645{5}[source]
Hundreds of millions of people watch TV and listen to radio that is at least 30% ad content per hour.
85. hobofan ◴[] No.44572679{3}[source]
ARR traditionally is _annual_ recurring revenue. The notion that it may be interpreted as _annualized_ and extrapolatable from MRR is a very recent development, and I doubt that most people interpret it as that.
replies(1): >>44573459 #
86. tsukikage ◴[] No.44572688{4}[source]
…for starters, you can sell them the ability to integrate your AI platform into whatever it is they are building, so you can then sell your stuff to their customers.
87. Centigonal ◴[] No.44572756{4}[source]
The point I'm making is that, even in the extreme case where we cease all additional R&D on LLMs, what has been developed up until now has a great deal of utility and transformative power, and that utility can be delivered at scale for cheap. So, even if LLMs don't become an economic boon for the companies that enable them, the transformative effect they have and will continue to have on society is inevitable.

Edit: I believe that "LLMs transforming society is inevitable" is a much more defensible assertion than any assertion about the nature of that transformation and the resulting economic winners and losers.

replies(1): >>44574055 #
88. lelanthran ◴[] No.44572766{6}[source]
> It needs constant prompting, babysitting, feedback, iteration. It's just a tool. It massively boosts productivity in many cases, yes.

It doesn't sound like you are disagreeing with me: that role you described is one of manager, not of programmer.

> High level languages also massively boosted productivity, but we didn't see salaries collapse from that.

Those high level languages still needed actual programmers. If the LLM is able to 10x the output of a single programmer because that programmer is spending all their time managing, you don't really need a programmer anymore, do you?

> I've been doing 80% review and design for years, it's called not being a mid or junior level developer.

Maybe it differs from place to place. I was a senior and a staff engineer, at various places including a FAANG. My observations were that even staff engineer level was still spending around 2 - 3 hours a day writing code. If you're 10x'ing your productivity, you almost certainly aren't spending 2 - 3 hours a day writing code.

> I constantly get junior developers handed to me from "strategic partners", they are just disguised as senior developers. I'm telling you brother, the LLMs aren't helping these guys do the job. I've let go 3 of them in July alone.

This is a bit of a non sequitur; what does that have to do with breaking the pipeline for actual juniors?

Without juniors, we don't get seniors. Without seniors and above, who will double-check the output of the LLM?[1]

If no one is hiring juniors anymore, then the pipeline is broken. And since the market price of a programmer is going to be set at $200/m, where will you find new entrants for this market?

Hell, even mid-level programmers will exit, because when a 10-programmer team can be replaced by a 1-person manager and a $200/m coding agent, those 9 people aren't quietly going to starve while the industry needs them again. They're going to go off and find something else to do, and their skills will atrophy (just like the 1-person LLM manager skills will atrophy eventually as well).

----------------------------

[1] Recall that my first post in this thread was to say that the LLM coding agents have to get so good that programmers aren't needed anymore because we won't have programmers anymore. If they aren't that good when the current crop starts retiring then we're in for some trouble, aren't we?

replies(1): >>44573132 #
89. overfeed ◴[] No.44572813{3}[source]
> This echoes a lot of the rhetoric around "but how will facebook/twitter/etc make money?"

The answer was, and will be ads (talk about inevitability!)

Can you imagine how miserable interacting with ad-funded models will be? Not just because of the ads they spew, but also the penny-pinching on training and inference budgets, with an eye focused solely on profitability. That is what the the future holds: consolidations, little competition, and models that do the bare-minimum, trained and operated by profit-maximizing misers, and not the unlimited intelligence AGI dream they sell.

replies(2): >>44573103 #>>44573707 #
90. sheiyei ◴[] No.44572818{7}[source]
> they're trusted to accurately represent the sources.

Which is still too much trust

91. fkyoureadthedoc ◴[] No.44572886{7}[source]
Not writing and committing code with GitHub Copilot, I'll tell you that. These things need to come a _long_ way before that's a reality.
92. owlninja ◴[] No.44572951{4}[source]
I was chatting with Gemini about vacation ideas and could absolutely picture a world where if it lists some hotels I might like, the businesses that bought some LLM ad space could easily show up more often than others.
93. ebiester ◴[] No.44572986{4}[source]
Consider the general research - in all, it doesn't eliminate people, but let's say it shakes out to speeding up developers 10% over all tasks. (That includes creating tickets, writing documentation, unblocking bugs, writing scripts, building proof of concepts, and more rote refactoring, but does not solve the harder problems or stop us from doing the hard work of software engineering that doesn't involve lines of code.)

That means that it's worth up to 10% of a developer's salary as a tool. And more importantly, smaller teams go faster, so it might be worth that full 10%.

Now, assume other domains end up similar - some less, some more. So, that's a large TAM.
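As a rough sketch of that arithmetic (the salary figure is an illustrative placeholder, not data from the thread):

```python
# Illustrative value-of-tooling estimate: a tool that speeds a developer
# up by some fraction is worth up to that fraction of their salary per year.
# The salary figure is a placeholder assumption.

def tool_value_per_dev(salary: float, speedup: float) -> float:
    """Upper bound on what the tool is worth per developer per year."""
    return salary * speedup

salary = 150_000.0     # assumed fully-loaded developer salary, $/yr
speedup = 0.10         # the 10% figure from the comment above

print(tool_value_per_dev(salary, speedup))   # ≈ 15000
```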

94. LordDragonfang ◴[] No.44573012{4}[source]
It very much does not assume that, only that some fraction will have become accustomed to using it to the point of not giving it up. In fact, they could probably remain profitable without a single new customer, given the number of subscribers they already have.
95. sheiyei ◴[] No.44573056{5}[source]
Your argument requires "Claude can replace a programmer" to be true. Thus, your argument is false for the foreseeable future.
96. ◴[] No.44573061{4}[source]
97. thewebguyd ◴[] No.44573078{5}[source]
I just hope that if it comes to that (and I have no doubt that it will), regulation will catch up and mandate that any ad/product placement is labeled as such and not just slipped in with no disclosure whatsoever. But given that we've never regulated influencer marketing, which does the same thing, nor are TV placements explicitly called out as "sponsored", I have my doubts. One can hope, though.
98. xedrac ◴[] No.44573094{4}[source]
> especially if they can't pay trump to ban open source?

Huh? Do you mean for official government use?

99. 6510 ◴[] No.44573103{4}[source]
I see a real window this time to sell your soul.
100. roughly ◴[] No.44573104{3}[source]
Social networks finding profitability via advertising is what created the entire problem space of social media - the algorithmic timelines, the gaming, the dopamine circus, the depression, everything negative that’s come from social media has come from the revenue model, so yes, I think it’s worth being concerned about how LLMs make money, not because I’m worried they won’t, because I’m worried they Will.
replies(3): >>44573381 #>>44575502 #>>44577204 #
101. overfeed ◴[] No.44573125{4}[source]
> LLMs can’t do ads without compromising the product.

Spoiler: they are still going to do ads, their hand will be forced.

Sooner or later, investors are going to demand returns on the massive investments, and turn off the money faucet. There'll be consolidation, wind-downs and ads everywhere.

102. fkyoureadthedoc ◴[] No.44573132{7}[source]
> And since the market price of a programmer is going to be set at $200/m

You keep saying this, but I don't see it. The current tools just can't replace developers. They can't even be used in the same way you'd use a junior developer or intern. It's more akin to going from hand tools to power tools than it is getting an apprentice. The job has not been automated and hasn't been outsourced to LLMs.

Will it be? Who knows, but in my personal opinion, it's not looking like it will any time soon. There would need to be more improvement than we've seen from day 1 of ChatGPT until now before we could even be seriously considering this.

> Those high level languages still needed actual programmers.

So does the LLM from day one until now, and for the foreseeable future.

> This is a bit of a non-sequitor; what does that have to do with breaking the pipeline for actual juniors?

Who says the pipeline is even broken by LLMs? The job market went to shit with rising interest rates before LLMs hit the scene. Nobody was hiring them anyway.

replies(1): >>44578167 #
103. LtWorf ◴[] No.44573190{6}[source]
I'm not even in USA and I've never been in USA in my entire life.
replies(1): >>44580145 #
104. JackFr ◴[] No.44573270{4}[source]
You sell them Copilot. You sell them Cursor. You sell them Windsurf. You sell them Devin. You sell them Claude Code.

Software guys are doing much, much more than treating LLMs like an improved Stack Overflow. And a lot of them are willing to pay.

105. cgh ◴[] No.44573279{4}[source]
There's another specific class of person that seems helped by them: the paralysis-by-analysis programmer. I work with someone really smart who simply cannot get started when given ordinary coding tasks. She researches, reads, and understands the problem inside and out but cannot start actually writing code. LLMs have pushed her past this paralysis and given her the momentum to continue.

On the other end, I know a guy who writes deeply proprietary embedded code that lives in EV battery controllers and he's found LLMs useless.

106. gomox ◴[] No.44573292{5}[source]
I predict this comment to enter the Dropbox/iPod hall of shame of discussion forum skeptics.
107. TeMPOraL ◴[] No.44573316{5}[source]
I don't even get where this argument comes from. Pretraining is expensive, yes, but both LoRAs in diffusion models and finetunes of transformers show us that this is not the be-all, end-all; there's plenty of work being done on extensively tuning base models for cheap.

But inference? Inference is dirt cheap and keeps getting cheaper. You can run models lagging the frontier by 6-12 months on consumer hardware, and by this I don't mean absolutely top-shelf specs, but more of "oh cool, turns out the {upper-range gaming GPU/Apple Silicon machine} I bought a year ago is actually great at running local {image generation/LLM inference}!" level. This is not to say you'll be able to run o3 or Opus 4 on a laptop next year - larger and more powerful models obviously require more hardware resources. But this should anchor expectations a bit.

We're measuring inference costs in multiples of gaming GPUs, so it's not an impending ecological disaster as some would like the world to believe - especially after accounting for data centers being significantly more efficient at this, with specialized hardware, near-100% utilization, countless of optimization hacks (including some underhanded ones).

108. pegasus ◴[] No.44573318{6}[source]
For one, it's not like we're at some CS conference, so we're engaging in ordinary speech here, as far as I can tell. For two, "trusted" doesn't have just one meaning, even in the narrower context of CS.
109. milesvp ◴[] No.44573381{4}[source]
I think this can't be overstated. It also destroyed search. I listened to a podcast a few years ago with an early Googler who talked about this very precipice in early Google days. They did a lot of testing and a lot of modeling of people's valuation of search. They figured that the average person got something like $50/yr of value out of search (I can't remember the exact number; I hope I'm not off by an order of magnitude), and that was the most they could ever realistically charge. Meanwhile, advertising for just Q4 was like 10 times that value. It meant that they knew advertising on the platform was inevitable. They also acknowledged that it would lead to the very problem Brin and Page wrote about in their seminal paper on search.

I see LLMs inevitably leading to the same place. There will undoubtedly be advertising baked into the models. It is too strong a financial incentive. I can only hope that an open source alternative will at least allow for a hobbled version to consume.

edit: I think this was the podcast https://freakonomics.com/podcast/is-google-getting-worse/

replies(1): >>44575943 #
110. immibis ◴[] No.44573390{3}[source]
Targeted banner ads based on chat history is last-two-decades thinking. The money with LLMs will be in targeted answers. Have Coca-Cola pay you a few billion dollars to reinforce the model to say "Coke" instead of "soda". Train it that the best source of information about political subjects is to watch Fox News. This even works with open-source models, too!
replies(1): >>44574196 #
111. ysavir ◴[] No.44573394{3}[source]
The thing about facebook/twitter/etc was that everyone knew how they achieve lock-in and build a moat (network effect), but the question was around where to source revenue.

With LLMs, we know what the revenue source is (subscription prices and ads), but the question is about the lock-in. Once each of the AI companies stops building new iterations and just offers a consistent product, how long until someone else builds the same product but charges less for it?

What people often miss is that building the LLM is actually the easy part. The hard part is getting sufficient data on which to train the LLM, which is why most companies just put ethics aside and steal and pirate as much as they can before any regulations cuts them off (if any regulations ever even do). But that same approach means that anyone else can build an LLM and train on that data, and pricing becomes a race to the bottom, if open source models don't cut them out completely.

replies(1): >>44574537 #
112. throwawayoldie ◴[] No.44573459{4}[source]
What does it tell you then, that the interpretation of "A" as "annualized" is the interpretation Anthropic, to name one, has chosen?
113. jahewson ◴[] No.44573506[source]
Then cut off the free riders. Problem solved overnight.
114. Wowfunhappy ◴[] No.44573558{3}[source]
> This echoes a lot of the rhetoric around "but how will facebook/twitter/etc make money?" back in the mid 2000s.

The difference is that Facebook costs virtually nothing to run, at least on a per-user basis. (Sure, if you have a billion users, all of those individual rounding errors still add up somewhat.)

By contrast, if you're spending lots of money per user... well look at what happened to MoviePass!

The counterexample here might be Youtube; when it launched, streaming video was really expensive! It still is expensive too, but clearly Google has figured out the economics.

replies(1): >>44574284 #
115. lxgr ◴[] No.44573568{6}[source]
I meant it in the ordinary speech sense (which I don't even thing contradicts the "CS sense" fwiw).

Many people have a lot of trust in anything ChatGPT tells them.

116. ghc ◴[] No.44573670{6}[source]
> You need to train new models to advance the knowledge cutoff

That's a great point, but I think it's less important now with MCP and RAG. If VC money dried up and the bubble burst, we'd still have broadly useful models that wouldn't be obsolete for years. Releasing a new model every year might be a lot cheaper if a company converts GPU opex to capex and accepts a long training time.

> Also the semianalysis estimate is from Feb 2023,

Oh! I missed the date. You're right, that's a lot more expensive. On the other hand, inference has likely gotten a lot cheaper (in terms of GPU TOPS) too. Still, I think there's a profitable business model there if VC funding dries up and most of the model companies collapse.

117. lxgr ◴[] No.44573705{5}[source]
Yup, and I wouldn't be willing to bet that any firewall between content and advertising would hold, long-term.

For example, the more product placement opportunities there are, the more products can be placed, so sooner or later that'll become an OKR to the "content side" of the business as well.

118. signatoremo ◴[] No.44573707{4}[source]
It won’t be ads. Social media targets consumers, so advertising is dominant. We all love free services and don’t mind some ads.

AI, on the other hand, targets businesses and consumers alike. A bank using an LLM won’t get ads. Using LLMs will be a cost of doing business. Do you know what that means for consumers? The price of ChatGPT will go down.

replies(2): >>44574025 #>>44579330 #
119. johnnyanmac ◴[] No.44573961{3}[source]
Well, given the answers to the former: maybe we should stop now before we end up selling even more of our data off to technocrats. Or worse, your chatbot shilling to you between prompts.

And yes these are still businesses. If they can't find profitability they will drop it like it's hot. i.e. we hit another bubble burst that tech is known to do every decade or 2. There's no free money anymore to carry them anymore, so perfect time to burst.

120. thewebguyd ◴[] No.44574022[source]
> I also imagine it’s hard to go to investors with “while all of our competitors are improving their models and either closing the gap or surpassing us, we’re just going to stabilize and see if people will pay for our current product.”

Yeah, no one wants to be the first to stop improving models. As long as investor money keeps flowing in there's no reason to - just keep burning it and try to outlast your competitors, figure out the business model later. We'll only start to see heavy monetization once the money dries up, if it ever does.

replies(1): >>44574405 #
121. johnnyanmac ◴[] No.44574025{5}[source]
>AI on the other hand target businesses and consumers alike.

Okay. So AI will be using ads for consumers and making deals with the billionaires. If Windows 11/12 still puts ads in what is a paid premium product, I see no optimism in thinking that a "free" chatbot will not also resort to it. Not as long as the people up top only see dollar signs and not long-term viability.

>Price for ChatGPT will go down.

The price of ChatGPT, in reality, is going up in the meantime. This is like hoping grocery prices come down as inflation lessens. That never happens; you can only hope to be compensated more to make up for inflation.

replies(1): >>44576560 #
122. johnnyanmac ◴[] No.44574055{5}[source]
>what has been developed up until now has a great deal of utility and transformative power

I think we'd be more screwed than VR if development ceased today. They are little more than toys right now, whose most successful outings are grifts, and the most useful tools simply aid existing tooling (auto-correct). It is not really "intelligence" as of now.

>I believe that "LLMs transforming society is inevitable" is a much more defensible assertion

Sure. But into what? We can't just talk about change for change's sake. Look at the US in 2025 with that mentality.

123. johnnyanmac ◴[] No.44574101{5}[source]
You still have faith in society after decades of ads being spit at them.
124. johnnyanmac ◴[] No.44574128{3}[source]
I mean, it adjusted down by having some hundreds of thousands of engineers laid off in the last 2+ years. They know slashing salaries is legal suicide, so they just make the existing workers work 3x as hard.
125. ericfr11 ◴[] No.44574196{4}[source]
It sounds quite scary that an LLM could be trained on a single source of news (especially Fox News).
126. PebblesRox ◴[] No.44574260{7}[source]
If you believe this, people believe everything they read by default and have to apply a critical thinking filter on top of it to not believe the thing.

I know I don't have as much of a filter as I ought to!

https://www.lesswrong.com/s/pmHZDpak4NeRLLLCw/p/TiDGXt3WrQwt...

replies(1): >>44575741 #
127. nyarlathotep_ ◴[] No.44574269{6}[source]
> I constantly get junior developers handed to me from "strategic partners", they are just disguised as senior developers. I'm telling you brother, the LLMs aren't helping these guys do the job. I've let go 3 of them in July alone.

I find this surprising. I figured the opposite: that the quality of body shop type places would improve and the productivity increases would decrease as you went "up" the skill ladder.

I've worked on/inherited a few projects from the Big Name body shops and, frankly, I'd take some "vibe coded" LLM mess any day of the week. I really figured there was nowhere to go but "up" for those kinds of projects.

128. jsnell ◴[] No.44574284{4}[source]
You're either overestimating the cost of inference or underestimating the cost of running a service like Facebook at that scale. Meta's cost of revenue (i.e. just running the service, not R&D, not marketing, not admin, none of that) was about $30B/year in 2024. In the leaked OpenAI financials from last year, their 2024 inference costs were 1/10th of that.
replies(2): >>44575558 #>>44581093 #
129. BolexNOLA ◴[] No.44574405{3}[source]
Maybe I’m naïve/ignorant of how things are done in the VC world, but given the absolutely enormous amount of money flowing into so many AI startups right now, I can’t imagine that the gravy train is going to continue for more than a few years. Especially not if we enter any sort of economic downturn/craziness from the very inconsistent and unpredictable decisions being made by the current administration
replies(1): >>44574587 #
130. danielbln ◴[] No.44574494{3}[source]
You may want to pay for Claude Max outside of the Google or iOS ecosystem and save $40/month.
131. umpalumpaaa ◴[] No.44574537{4}[source]
ChatGPT also makes money via affiliate links. If you ask ChatGPT something like "what is the best airline-approved cabin luggage you can buy?" you get affiliate links to Amazon and other sites. I use ChatGPT most of the time before I buy anything these days. From personal experience (I operated an app financed by affiliate links), I can tell you that this for sure generates a lot of money. My app was relatively tiny and I only got about 1% of the money I generated, but that app pulled in about $50k per month.

Buying better things is one of my main use cases for GPT.

replies(1): >>44575812 #
132. thewebguyd ◴[] No.44574587{4}[source]
You would think so. Investors are eventually going to want a return on their money put in. But there seems to be a ton of hype and irrationality around AI, even worse than blockchain back in the day.

I think there's an element of FOMO - should someone actually get to AGI, or at least something good enough to actually impact the labor market and replace a lot of jobs, the investors of that company/product stand to make obscene amounts of money. So everyone pumps in, in hope of that far off future promise.

But like you said, how long can this keep going before it starts looking like that future promise will not be fulfilled in this lifetime and investors start wanting a return.

133. reasonableklout ◴[] No.44575036[source]
Last year, ChatGPT was 75% of OpenAI's revenue[1], not the API.

[1]: https://www.businessofapps.com/data/chatgpt-statistics/

134. dvfjsdhgfv ◴[] No.44575101{5}[source]
> Obviously you don't need to train new models to operate existing ones.

For a few months, maybe. Then they become obsolete and, in some cases like coding, useless.

135. lotsoweiners ◴[] No.44575104{4}[source]
To be fair, ads always compromise the product.
136. rpdillon ◴[] No.44575180{3}[source]
Yep. Remember when Amazon could never make money and we kept trying to explain they were reinvesting their earnings into R&D and nobody believed it? All the rhetoric went from "Amazon can't be profitable" to "Amazon is a monopoly" practically overnight. It's like people don't understand the explore/exploit strategy trade-off.
replies(1): >>44575844 #
137. swat535 ◴[] No.44575452{4}[source]
> LLMs can’t do ads without compromising the product.

It depends on what you mean by "compromise" here but they sure can inject ads.. like make the user wait 5 seconds, show an ad, then reply..

They can delay the response times and promote "premium" plans, etc

Lots of ways to monetize, I suppose the question is: will users tolerate it?

Based on what I've seen, the answer is yes, people will tolerate anything as long as it's "free".

138. socalgal2 ◴[] No.44575502{4}[source]
Social networks will have all of those effects without any effort by the platform itself, because the person with more followers has more influence, so people on the platform will do all they can to get more.

I'm not excusing the platforms for bad algorithms. Rather, I believe it's naive to think that, but for the behavior of the platform itself that things would be great and rosy.

No, they won't. The fact that nearly every person in the world can mass communicate to nearly every other person in the world is the core issue. It is not platform design.

139. matthewdgreen ◴[] No.44575558{5}[source]
But their research costs are extremely high, and without a network effect that revenue is only safe until a better competitor emerges.
replies(1): >>44577903 #
140. andrewflnr ◴[] No.44575741{8}[source]
That checks out with my experience. I don't think it's just reading either. Even deeper than stranger danger, we're inclined to assume other humans communicating with us are part of our tribe, on our side, and not trying to deceive us. Deception, and our defenses against deception, are a secondary phenomenon. It's the same reason that jokes like "the word 'gullible' is written in the ceiling", gesturing to wipe your face at someone with a clean face, etc, all work by default.
141. SJC_Hacker ◴[] No.44575780{5}[source]
That's pretty much what search engines are nowadays
142. ysavir ◴[] No.44575812{5}[source]
Makes you wonder whether the affiliate links are actual, valid affiliate links or just hallucinations from affiliate links it's come across in the wild
replies(1): >>44577465 #
143. mxschumacher ◴[] No.44575826{3}[source]
what I struggle with is that the top 10 providers of LLMs all have identical* products. The services have amazing capabilities, but no real moats.

The social media applications have strong network effects, this drives a lot of their profitability.

* sure, there are differences, see the benchmarks, but from a consumer perspective, there's no meaningful differentiation

replies(1): >>44578080 #
144. mxschumacher ◴[] No.44575844{4}[source]
AWS is certainly super profitable, but if the e-commerce business were standalone, would it really be such a cash-gusher?
replies(1): >>44576546 #
145. SJC_Hacker ◴[] No.44575943{5}[source]
This is an interesting take - is my "attention" really worth several thousand a year? Are my purchasing decisions influenced by advertising to such a degree that someone is literally paying someone else for my attention?

I wonder: could I instead sell my "attention" myself, rather than others profiting off it?

replies(1): >>44577375 #
146. rpdillon ◴[] No.44576546{5}[source]
Amazon is successful because of the insanely broad set of investments they've made - many of them compound well in a way that supports their primary business. Amazon Music isn't successful, but it makes Kindle tablets more successful. This is in contrast to Google, which makes money on ads, and everything else is a side quest. Amazon has side quests, but also has many more initiatives that create a cohesive whole from the business side.

So while I understand how it looks from a financial perspective, I think that perspective is distorted in terms of what causes those outcomes. Many of the unprofitable aspects directly support the profitable ones. Not always, though.

147. Geezus_42 ◴[] No.44576560{6}[source]
Has any SaaS product ever reduced its subscription cost?
replies(1): >>44577885 #
148. Geezus_42 ◴[] No.44576838{4}[source]
Social and search both compromised the product for ad revenue.
149. eddythompson80 ◴[] No.44577153{6}[source]
(sorry I kept writing and didn't realize how long it got and don't have the time to summarize it better)

Here is how it sort of happens sometimes:

- You are an analyst at some hedge fund.

- You study the agriculture industry overall and understand the general macro view of the market segment and its parameters etc.

- You pick a few random agriculture companies (e.g.: WeGrowPotatoes Corp.) that had really solid returns between 2001 and 2007 and analyze their performance.

- You try to see how you could have predicted the company's performance in 2001 based on all the random bits of data you have. You are not looking for something that makes sense per se. Investing based on metrics that make intuitive sense is extremely hard, if not impossible, because everyone else is doing the same thing, which makes the results very unpredictable.

- You figure out that, for whatever reason, if you take a company's total sales, subtract reserved cash, and divide that by the global inflation rate minus the current interest rate in the US, this company has a value that's an anomaly among all the other agriculture companies.

- You call that bullshit The SAGI™ ratio (Sales Adjusted for Global Inflation ratio)

- You calculate the SAGI™ ratio for other agriculture companies at different points in time and determine its actual historical performance and parameters compared to WeGrowPotatoes in 2001.

- You then calculate that SAGI™ ratio for all companies today and study the ones that match your desired number then invest in them. You might even start applying SAGI™ analysis to non-agriculture companies.

- (If you're successful) In a few years you will have built a reputation. Everyone wants to learn from you how you value a company. You share your method with the world. You still investigate each business to see how much it diverges from the "WeGrowPotatoes" model you developed the SAGI ratio from.

- People look at your returns, look at your step (1) of calculating SAGI, and proclaim the SAGI ratio paramount. Everyone is talking about nothing but the SAGI ratio. Someone creates SAGIHeads.com and /r/SAGInation, and now Google lists it under every stock for some reason.

It's all about that (sales - cash) / (inflation - interest). A formula that makes no sense; but people are gonna start working it backwards by trying to understand what "sales - cash" actually means for a company.

Like, that SAGI ratio is bullshit I just made up, but EV is an actual metric, and it's generally calculated as (equity + debt - cash). What do you think that tells you about a company? And why do people look at it? How does it make any sense for a company to sum its equity and debt? What is that? According to financial folks it tells you the actual market operating size of the company. The cash a company holds is not in the market, so it doesn't count. The equity obviously counts, and debt for a company can be a positive if it's on a path to converting into assets on a reasonable timeline.

I don't know why investors in the tech space focus so much on ARR. It's possible that it was a useful metric with the traditional internet startup model (Google, Facebook, Twitter, Instagram, Reddit, etc.), where the general wisdom was that you couldn't expect people to pay much for online services. So generating any sort of revenue almost always correlated with how many contracts you signed with advertisers or enterprises, and those are usually pretty stable and lucrative.

I highly recommend listening to Warren Buffett's investing Q&As or lectures. He got me to view companies and the entire economy differently.

replies(1): >>44577403 #
150. Centigonal ◴[] No.44577204{4}[source]
oh, I 100% agree with this. The way the social web was monetized is the root of a lot of evil. With AI, we have an opportunity to learn from the past. I think a lesson here is "don't wait to think critically about the societal consequences of the next Big Tech Thing's business model because you have doubts about its profitability or unit economics."
151. lymbo ◴[] No.44577375{6}[source]
Yes, but your attention rapidly loses value the more your subsequent behavior misaligns with the buyer's desires. In other words, the value of targeting unsuspecting, idle minds far exceeds that of a willing and conscious attention seller.
152. jdiff ◴[] No.44577403{7}[source]
No worries about the length, I appreciate you taking the time and appreciate the insight! That does help start to work ARR into a mental model that, while still not sane, is at least as understandably insane as everything else in the financial space.
153. umpalumpaaa ◴[] No.44577465{6}[source]
It clearly is 100% custom UI logic implemented by OpenAI. They render the products in carousels. They probably get a list of product and brand names from the LLM (for certain requests/responses) and render that in a separate UI after getting affiliate links for those products. It's not hard to do: just slap your affiliate ID onto the links you found and you're done.
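The "slap on your affiliate ID" step is a one-line URL rewrite. A hedged sketch in Python; the `tag` parameter name is an assumption borrowed from common affiliate schemes, since OpenAI's actual mechanism isn't public:

```python
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def add_affiliate_id(url: str, affiliate_id: str) -> str:
    """Append an affiliate tag to a product URL, preserving any
    existing query parameters. The 'tag' key is hypothetical."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query["tag"] = affiliate_id
    return urlunparse(parts._replace(query=urlencode(query)))

print(add_affiliate_id("https://example.com/product?id=42", "myaffid-20"))
# https://example.com/product?id=42&tag=myaffid-20
```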
replies(1): >>44581004 #
154. scarface_74 ◴[] No.44577467{3}[source]
No one ever doubted that Facebook would make money. It was profitable early on, never lost that much money and was definitely profitable by the time it went public.

Twitter has never been consistently profitable

155. scarface_74 ◴[] No.44577474{3}[source]
No one ever doubted that Facebook would make money. It was profitable early on, never lost that much money and was definitely profitable by the time it went public.

Twitter has never been consistently profitable.

ChatGPT also has higher marginal costs than any of the software only tech companies did previously.

156. koolba ◴[] No.44577885{7}[source]
Does S3 count as a SaaS? Or is that too low level?

How about tarsnap? https://www.daemonology.net/blog/2014-04-02-tarsnap-price-cu...

157. jsnell ◴[] No.44577903{6}[source]
You're moving the goalposts, given the original complaint was not about research costs but about the marginal cost of serving additional users...

I guess you'd be surprised to find out that Meta's R&D costs are an order of magnitude higher than OpenAI's training + research costs? ($45B in 2024, vs. about $5B for OpenAI according to the leaked financials.)

158. ogogmad ◴[] No.44578080{4}[source]
This is perfect news for consumers and terrible news for investors. Which are you?
159. bscphil ◴[] No.44578167{8}[source]
> The current tools just can't replace developers. They can't even be used in the same way you'd use a junior developer or intern. It's more akin to going from hand tools to power tools than it is getting an apprentice.

In that case it seems to depend on what you mean by "replacing", doesn't it? It doesn't mean a non-developer can do a developer's job, but it does mean that one developer can do two developers' jobs. That leads to a lot more competition for the remaining jobs, and presumably many competent developers will accept lower salaries in exchange for having a job at all.

160. overfeed ◴[] No.44579330{5}[source]
> Price for ChatGPT will go down.

As will the response quality, while maintaining the same product branding. Users will accept whatever response OpenAI gives them under the "4o", "6p", "9x" or whatever brand of the day, even as they ship-of-Theseus the service for higher margins. I've yet to see an AI service with QoS guarantees, or even a promise that the model weights and infrastructure won't be "optimized" over time to the customer's disadvantage.

161. disgruntledphd2 ◴[] No.44580145{7}[source]
Wow, that's either a bug or an incredibly incompetent advertiser.
replies(1): >>44581329 #
162. ysavir ◴[] No.44581004{7}[source]
ahh, okay. I don't use the service, I didn't realize they had a dedicated UI for it. I assumed it was all just embedded in the text.
163. Wowfunhappy ◴[] No.44581093{5}[source]
You're right, I was underestimating the cost of running Facebook! $30B spent / ~3B users = ~$10 per user per year. I'd thought it would be closer to 10¢.

Do you know why it's so expensive? I'd have thought serving HTML would be cheaper, particularly at Facebook's scale. Does the $30B include the cost of human content moderators? Facebook also does a lot of video now; do you think that's it?

Also, even still, $10 per user has got to be an order of magnitude less than what OpenAI is spending on its free users, no?

164. LtWorf ◴[] No.44581329{8}[source]
Are we surprised that someone working for Trump is not competent?