Most active commenters
  • LtWorf(3)
  • lxgr(3)
  • disgruntledphd2(3)
  • andrewflnr(3)

←back to thread

LLM Inevitabilism

(tomrenner.com)
1616 points SwoopsFromAbove | 40 comments | | HN request time: 1.427s | source | bottom
Show context
lsy ◴[] No.44568114[source]
I think two things can be true simultaneously:

1. LLMs are a new technology and it's hard to put the genie back in the bottle with that. It's difficult to imagine a future where they don't continue to exist in some form, with all the timesaving benefits and social issues that come with them.

2. Almost three years in, companies investing in LLMs have not yet discovered a business model that justifies the massive expenditure of training and hosting them, the majority of consumer usage is at the free tier, the industry is seeing the first signs of pulling back investments, and model capabilities are plateauing at a level where most people agree that the output is trite and unpleasant to consume.

There are many technologies that have seemed inevitable and seen retreats under the lack of commensurate business return (the supersonic jetliner), and several that seemed poised to displace both old tech and labor but have settled into specific use cases (the microwave oven). Given the lack of a sufficiently profitable business model, it feels as likely as not that LLMs settle somewhere a little less remarkable, and hopefully less annoying, than today's almost universally disliked attempts to cram it everywhere.

replies(26): >>44568145 #>>44568416 #>>44568799 #>>44569151 #>>44569734 #>>44570520 #>>44570663 #>>44570711 #>>44570870 #>>44571050 #>>44571189 #>>44571513 #>>44571570 #>>44572142 #>>44572326 #>>44572360 #>>44572627 #>>44572898 #>>44573137 #>>44573370 #>>44573406 #>>44574774 #>>44575820 #>>44577486 #>>44577751 #>>44577911 #
alonsonic ◴[] No.44570711[source]
I'm confused with your second point. LLM companies are not making any money from current models? Openai generates 10b USD ARR and has 100M MAUs. Yes they are running at a loss right now but that's because they are racing to improve models. If they stopped today to focus on optimization of their current models to minimize operating cost and monetizing their massive user base you think they don't have a successful business model? People use this tools daily, this is inevitable.
replies(11): >>44570725 #>>44570756 #>>44570760 #>>44570772 #>>44570780 #>>44570853 #>>44570896 #>>44570964 #>>44571007 #>>44571541 #>>44571655 #
lordnacho ◴[] No.44570853[source]
Are you saying they'd be profitable if they didn't pour all the winnings into research?

From where I'm standing, the models are useful as is. If Claude stopped improving today, I would still find use for it. Well worth 4 figures a year IMO.

replies(5): >>44570918 #>>44570925 #>>44570962 #>>44571742 #>>44572421 #
1. jsnell ◴[] No.44570962[source]
They'd be profitable if they showed ads to their free tier users. They wouldn't even need to be particularly competent at targeting or aggressive with the amount of ads they show, they'd be profitable with 1/10th the ARPU of Meta or Google.

And they would not be incompetent at targeting. If they were to use the chat history for targeting, they might have the most valuable ad targeting data sets ever built.

replies(5): >>44571061 #>>44571136 #>>44572280 #>>44572443 #>>44573390 #
2. bugbuddy ◴[] No.44571061[source]
I heard majority of the users are techies asking coding questions. What do you sell to someone asking how to fix a nested for loop in C++? I am genuinely curious. Programmers are known to be the stingiest consumers out there.
replies(9): >>44571134 #>>44571182 #>>44571264 #>>44571269 #>>44572071 #>>44572254 #>>44572375 #>>44572688 #>>44573270 #
3. LtWorf ◴[] No.44571134[source]
According to fb's aggressively targeted marketing, you sell them donald trump propaganda.
replies(1): >>44571275 #
4. lxgr ◴[] No.44571136[source]
Bolting banner ads onto a technology that can organically weave any concept into a trusted conversation would be incredibly crude.
replies(4): >>44571218 #>>44571487 #>>44572061 #>>44572225 #
5. cuchoi ◴[] No.44571182[source]
I'm not sure that stereotype holds up. Developers spend a lot: courses, cloud services, APIs, plugins, even fancy keyboards.

A quick search shows that click on ads targeting developers are expensive.

Also there is a ton of users asking to rewrite emails, create business plans, translate, etc.

6. nacnud ◴[] No.44571218[source]
True - but if you erode that trust then your users may go elsewhere. If you keep the ads visually separated, there's a respected boundary & users may accept it.
replies(2): >>44571708 #>>44572379 #
7. disgruntledphd2 ◴[] No.44571264[source]
You'd probably do brand marketing for Stripe, Datadog, Kafka, Elastic Search etc.

You could even loudly proclaim that the are ads are not targeted by users which HN would love (but really it would just be old school brand marketing).

8. Lewton ◴[] No.44571269[source]
> I heard majority of the users are techies asking coding questions.

Citation needed? I can't sit on a bus without spotting some young person using ChatGPT

9. disgruntledphd2 ◴[] No.44571275{3}[source]
It's very important to note that advertisers set the parameters in which FB/Google's algorithms and systems operate. If you're 25-55 in a red state, it seems likely that you'll see a bunch of that information (even if FB are well aware you won't click).
replies(1): >>44573190 #
10. Analemma_ ◴[] No.44571487[source]
Like that’s ever stopped the adtech industry before.

It would be a hilarious outcome though, “we built machine gods, and the main thing we use them for is to make people click ads.” What a perfect Silicon Valley apotheosis.

11. calvinmorrison ◴[] No.44571708{3}[source]
google did it. LLms are the new google search. It'll happen sooner or later.
replies(1): >>44572195 #
12. evilfred ◴[] No.44572061[source]
how is it "trusted" when it just makes things up
replies(3): >>44572191 #>>44572215 #>>44572641 #
13. jsnell ◴[] No.44572071[source]
OpenAI has half a billion active users.

You don't need every individual request to be profitable, just the aggregate. If you're doing a Google search for, like, the std::vector API reference you won't see ads. And that's probably true for something like 90% of the searches. Those searches have no commercial value, and serving results is just a cost of doing business.

By serving those unmonetizable queries the search engine is making a bet that when you need to buy a new washing machine, need a personal injury lawyer, or are researching that holiday trip to Istanbul, you'll also do those highly commercial and monetizable searches with the same search engine.

Chatbots should have exactly the same dynamics as search engines.

14. andrewflnr ◴[] No.44572191{3}[source]
That's a great question to ask the people who seem to trust them implicitly.
replies(1): >>44572373 #
15. ptero ◴[] No.44572195{4}[source]
Yes, but for a while google was head and shoulders above the competition. It also poured a ton of money into building non-search functionality (email, maps, etc.). And had a highly visible and, for a while, internally respected "don't be evil" corporate motto.

All of which made it much less likely that users would bolt in response to each real monetization step. This is very different to the current situation, where we have a shifting landscape with several AI companies, each with its strengths. Things can change, but it takes time for 1-2 leaders to consolidate and for the competition to die off. My 2c.

16. dingnuts ◴[] No.44572215{3}[source]
15% of people aren't smart enough to read and follow directions explaining how to fold a trifold brochure, place it in an envelope, seal it, and address it

you think those people don't believe the magic computer when it talks?

17. ModernMech ◴[] No.44572225[source]
I imagine they would be more like product placements in film and TV than banner ads. Just casually dropping a recommendation and link to Brand (TM) in a query. Like those Cerveza Cristal ads in star wars. They'll make it seem completely seamless to the original query.
replies(2): >>44573078 #>>44573705 #
18. yamazakiwi ◴[] No.44572254[source]
A lot of people use it for cooking and other categories as well.

Techies are also great for network growth and verification for other users, and act as community managers indirectly.

19. naravara ◴[] No.44572280[source]
If interactions with your AI start sounding like your conversation partner shilling hot cocoa powder at nobody in particular those conversations are going to stop being trusted real quick. (Pop culture reference: https://youtu.be/MzKSQrhX7BM?si=piAkfkwuorldn3sb)

Which may be for the best, because people shouldn’t be implicitly trusting the bullshit engine.

20. handfuloflight ◴[] No.44572373{4}[source]
They aren't trusted in a vacuum. They're trusted when grounded in sources and their claims can be traced to sources. And more specifically, they're trusted to accurately represent the sources.
replies(3): >>44572554 #>>44572818 #>>44574260 #
21. naravara ◴[] No.44572375[source]
The existence of the LLMs will themselves change the profile and proclivities of people we consider “programmers” in the same way the app-driven tech boom did. Programmers who came up in the early days are different from ones who came up in the days of the web are different from ones who came up in the app era.
22. SJC_Hacker ◴[] No.44572379{3}[source]
There will be a respected boundary for a time, then as advertisers find its more effective the boundaries will start to disappear
23. miki123211 ◴[] No.44572443[source]
and they wouldn't even have to make the model say the ads. I think that's a terrible idea which would drive model performance down.

Traditional banner ads, inserted inline into the conversation based on some classifier seem a far better idea.

24. andrewflnr ◴[] No.44572554{5}[source]
Nope, lots of idiots just take them at face value. You're still describing what rational people do, not what all actual people do.
replies(1): >>44572590 #
25. handfuloflight ◴[] No.44572590{6}[source]
Fair enough.
26. tsukikage ◴[] No.44572641{3}[source]
“trusted” in computer science does not mean what it means in ordinary speech. It is what you call things you have no choice but to trust, regardless of whether that trust is deserved or not.
replies(2): >>44573318 #>>44573568 #
27. tsukikage ◴[] No.44572688[source]
…for starters, you can sell them the ability to integrate your AI platform into whatever it is they are building, so you can then sell your stuff to their customers.
28. sheiyei ◴[] No.44572818{5}[source]
> they're trusted to accurately represent the sources.

Which is still too much trust

29. thewebguyd ◴[] No.44573078{3}[source]
I just hope that if it comes to that (and I have no doubt that it will), regulation will catch up and mandate any ad/product placement is labeled as such and not just slipped in with no disclosure whatsoever. But, given that we've never regulated influencer marketing which does the same thing, nor are TV placements explicitly called out as "sponsored" I have my doubts but one can hope.
30. LtWorf ◴[] No.44573190{4}[source]
I'm not even in USA and I've never been in USA in my entire life.
replies(1): >>44580145 #
31. JackFr ◴[] No.44573270[source]
You sell them Copilot. You Sell them CursorAI. You sell them Windsurf. You sell them Devin. You sell the Claude Code.

Software guys are doing much, much more than treating LLM's like an improved Stack Overflow. And a lot of them are willing to pay.

32. pegasus ◴[] No.44573318{4}[source]
For one, it's not like we're at some CS conference, so we're engaging in ordinary speech here, as far as I can tell. For two, "trusted" doesn't have just one meaning, even in the narrower context of CS.
33. immibis ◴[] No.44573390[source]
Targeted banner ads based on chat history is last-two-decades thinking. The money with LLMs will be targeted answers. Have Coca-Cola pay you a few billion dollars to reinforce the model to say "Coke" instead of "soda". Train it the best source of information about political subjects is to watch Fox News. This even works with open-source models, too!
replies(1): >>44574196 #
34. lxgr ◴[] No.44573568{4}[source]
I meant it in the ordinary speech sense (which I don't even thing contradicts the "CS sense" fwiw).

Many people have a lot of trust in anything ChatGPT tells them.

35. lxgr ◴[] No.44573705{3}[source]
Yup, and I wouldn't be willing to bet that any firewall between content and advertising would hold, long-term.

For example, the more product placement opportunities there are, the more products can be placed, so sooner or later that'll become an OKR to the "content side" of the business as well.

36. ericfr11 ◴[] No.44574196[source]
It sounds quite scary that an LLM could be trained on a single source of news (specially FN).
37. PebblesRox ◴[] No.44574260{5}[source]
If you believe this, people believe everything they read by default and have to apply a critical thinking filter on top of it to not believe the thing.

I know I don't have as much of a filter as I ought to!

https://www.lesswrong.com/s/pmHZDpak4NeRLLLCw/p/TiDGXt3WrQwt...

replies(1): >>44575741 #
38. andrewflnr ◴[] No.44575741{6}[source]
That checks out with my experience. I don't think it's just reading either. Even deeper than stranger danger, we're inclined to assume other humans communicating with us are part of our tribe, on our side, and not trying to deceive us. Deception, and our defenses against deception, are a secondary phenomenon. It's the same reason that jokes like "the word 'gullible' is written in the ceiling", gesturing to wipe your face at someone with a clean face, etc, all work by default.
39. disgruntledphd2 ◴[] No.44580145{5}[source]
Wow, that's either a bug or an incredibly incompetent advertiser.
replies(1): >>44581329 #
40. LtWorf ◴[] No.44581329{6}[source]
Are we surprised someone working for trump is not competent?