LLM Inevitabilism

(tomrenner.com)
1612 points SwoopsFromAbove | 12 comments
lsy ◴[] No.44568114[source]
I think two things can be true simultaneously:

1. LLMs are a new technology and it's hard to put the genie back in the bottle with that. It's difficult to imagine a future where they don't continue to exist in some form, with all the timesaving benefits and social issues that come with them.

2. Almost three years in, companies investing in LLMs have not yet discovered a business model that justifies the massive expenditure of training and hosting them, the majority of consumer usage is at the free tier, the industry is seeing the first signs of pulling back investments, and model capabilities are plateauing at a level where most people agree that the output is trite and unpleasant to consume.

There are many technologies that once seemed inevitable but retreated when the business returns never matched the investment (the supersonic jetliner), and several that seemed poised to displace both old tech and labor but settled into specific use cases (the microwave oven). Given the lack of a sufficiently profitable business model, it feels as likely as not that LLMs settle somewhere a little less remarkable, and hopefully less annoying, than today's almost universally disliked attempts to cram them everywhere.

replies(26): >>44568145 #>>44568416 #>>44568799 #>>44569151 #>>44569734 #>>44570520 #>>44570663 #>>44570711 #>>44570870 #>>44571050 #>>44571189 #>>44571513 #>>44571570 #>>44572142 #>>44572326 #>>44572360 #>>44572627 #>>44572898 #>>44573137 #>>44573370 #>>44573406 #>>44574774 #>>44575820 #>>44577486 #>>44577751 #>>44577911 #
alonsonic ◴[] No.44570711[source]
I'm confused by your second point. LLM companies are not making any money from current models? OpenAI generates $10B in ARR and has 100M MAUs. Yes, they are running at a loss right now, but that's because they are racing to improve models. If they stopped today and focused on optimizing their current models to minimize operating costs and on monetizing their massive user base, do you really think they wouldn't have a successful business model? People use these tools daily; this is inevitable.
replies(11): >>44570725 #>>44570756 #>>44570760 #>>44570772 #>>44570780 #>>44570853 #>>44570896 #>>44570964 #>>44571007 #>>44571541 #>>44571655 #
lordnacho ◴[] No.44570853[source]
Are you saying they'd be profitable if they didn't pour all the winnings into research?

From where I'm standing, the models are useful as is. If Claude stopped improving today, I would still find use for it. Well worth 4 figures a year IMO.

replies(5): >>44570918 #>>44570925 #>>44570962 #>>44571742 #>>44572421 #
jsnell ◴[] No.44570962[source]
They'd be profitable if they showed ads to their free-tier users. They wouldn't even need to be particularly competent at targeting or aggressive with the amount of ads they show; they'd be profitable with 1/10th the ARPU of Meta or Google.

And they would hardly be incompetent at targeting. If they used chat history for targeting, they might have the most valuable ad-targeting data set ever built.

replies(5): >>44571061 #>>44571136 #>>44572280 #>>44572443 #>>44573390 #
lxgr ◴[] No.44571136{3}[source]
Bolting banner ads onto a technology that can organically weave any concept into a trusted conversation would be incredibly crude.
replies(4): >>44571218 #>>44571487 #>>44572061 #>>44572225 #
1. evilfred ◴[] No.44572061{4}[source]
How is it "trusted" when it just makes things up?
replies(3): >>44572191 #>>44572215 #>>44572641 #
2. andrewflnr ◴[] No.44572191[source]
That's a great question to ask the people who seem to trust them implicitly.
replies(1): >>44572373 #
3. dingnuts ◴[] No.44572215[source]
15% of people aren't smart enough to read and follow directions explaining how to fold a trifold brochure, place it in an envelope, seal it, and address it.

You think those people don't believe the magic computer when it talks?

4. handfuloflight ◴[] No.44572373[source]
They aren't trusted in a vacuum. They're trusted when they're grounded in sources and their claims can be traced back to those sources. And more specifically, they're trusted to accurately represent those sources.
replies(3): >>44572554 #>>44572818 #>>44574260 #
5. andrewflnr ◴[] No.44572554{3}[source]
Nope, lots of idiots just take them at face value. You're still describing what rational people do, not what all actual people do.
replies(1): >>44572590 #
6. handfuloflight ◴[] No.44572590{4}[source]
Fair enough.
7. tsukikage ◴[] No.44572641[source]
“trusted” in computer science does not mean what it means in ordinary speech. It is what you call things you have no choice but to trust, regardless of whether that trust is deserved or not.
replies(2): >>44573318 #>>44573568 #
8. sheiyei ◴[] No.44572818{3}[source]
> they're trusted to accurately represent the sources.

Which is still too much trust

9. pegasus ◴[] No.44573318[source]
For one, it's not like we're at some CS conference, so we're engaging in ordinary speech here, as far as I can tell. For two, "trusted" doesn't have just one meaning, even in the narrower context of CS.
10. lxgr ◴[] No.44573568[source]
I meant it in the ordinary speech sense (which I don't even think contradicts the "CS sense", fwiw).

Many people have a lot of trust in anything ChatGPT tells them.

11. PebblesRox ◴[] No.44574260{3}[source]
If you believe this, people believe everything they read by default and have to apply a critical-thinking filter on top of it in order not to believe the thing.

I know I don't have as much of a filter as I ought to!

https://www.lesswrong.com/s/pmHZDpak4NeRLLLCw/p/TiDGXt3WrQwt...

replies(1): >>44575741 #
12. andrewflnr ◴[] No.44575741{4}[source]
That checks out with my experience. I don't think it's just reading, either. Even deeper than stranger danger, we're inclined to assume other humans communicating with us are part of our tribe, on our side, and not trying to deceive us. Deception, and our defenses against deception, are a secondary phenomenon. It's the same reason that jokes like "the word 'gullible' is written on the ceiling", gesturing to wipe your face at someone whose face is clean, etc., all work by default.