
321 points | jhunter1016 | 10 comments
WithinReason No.41878604
Does OpenAI have any fundamental advantage beyond brand recognition?
og_kalu No.41882479
The ChatGPT site crossed 3B visits last month (for perspective: https://imgur.com/a/hqE7jia). It has been >2B since May this year and >1.5B since March 2023. The summer slump of last year? Completely gone.

Gemini and Character AI? A few hundred million. Claude? Doesn't even register. And the gap has only been increasing.

So, "just" brand recognition ? That feels like saying Google "just" has brand recognition over Bing.

https://www.similarweb.com/blog/insights/ai-news/chatgpt-top...

_hark No.41884315
Claude's API usage absolutely registers (60% the size of OpenAI's); their chat interface just isn't as popular. [1]

[1]: https://www.tanayj.com/p/openai-and-anthropic-revenue-breakd...

og_kalu No.41884383
ChatGPT usage from the main site dwarfs API usage for both OpenAI and Anthropic, so we're not really saying different things here.

The vast majority of people using LLMs just use ChatGPT directly. Anthropic is doing fine for technical or business customers looking to offer LLM services in a wrapper but that doesn't mean they register in the public consciousness.

commandar No.41884431
>Anthropic is doing fine for technical or business customers looking to offer LLM services in a wrapper

If there's an actual business to be found in all this, that's where it's going to be.

The consumer side of this bleeds cash currently, and I'm deeply skeptical that enough of the public can be convinced to pay subscription fees high enough to cover running costs.

moralestapia No.41884979
No one here gets it, even though @sama has said it countless times.

I will write it explicitly for you once again:

The plan is to make inference so cheap it's negligible.

downWidOutaFite No.41885039
so... ad funded?
Incipient No.41885157
I think they mean the cost of running inference: either more efficient/powerful hardware, or more efficient software.

No one thinks about the cost of a db query any more, but I'm sure people did back in the day (well, I suppose with cloud stuff, people now do need to think about it again, haha).

FuckButtons No.41885653
There is no way that running a data center full of any current or prospective offering from Nvidia will cost anything close to negligible.
datadrivenangel No.41885822
Anybody with billions of database queries thinks about them.
downWidOutaFite No.41886099
Nobody is paying for the training, so you either pay for the inference or the ads do.
csomar No.41887345
If inference cost is so cheap and negligible, then we'll be able to run the models on an average computer. Which means they have no business model (assuming generosity from Meta in continuing to publish Llama for free).
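Whether a model fits on an "average computer" is mostly a memory question. A rough sketch, using illustrative parameter counts and quantization levels (the 70B figure and byte sizes are assumptions for the arithmetic, not claims about any specific product):

```python
# Back-of-envelope: memory needed just to hold model weights locally.
# Ignores KV cache, activations, and runtime overhead; numbers are illustrative.

def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate RAM/VRAM (in GB) for the weights alone."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# A hypothetical 70B-parameter Llama-class model at common quantizations:
for label, bytes_pp in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"70B @ {label}: ~{weight_memory_gb(70, bytes_pp):.0f} GB")
```

Even at aggressive 4-bit quantization, a 70B model wants roughly 35 GB for weights alone, which is why only smaller models run comfortably on typical consumer hardware today.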
moralestapia No.41888250
Yeah, but GP said no one.
bossyTeacher No.41890396
Especially when Google is good enough for most people. Most people just want information, not someone giving them digested info at $x per month. And all the fancy letter-writing assistance they need, they get for free via the corporate computer that likely has Microsoft Word.