
321 points | jhunter1016 | 2 comments
WithinReason No.41878604
Does OpenAI have any fundamental advantage beyond brand recognition?
replies(15): >>41878633 #>>41878979 #>>41880635 #>>41880834 #>>41881554 #>>41881647 #>>41881720 #>>41881764 #>>41881926 #>>41882221 #>>41882479 #>>41882695 #>>41883076 #>>41883128 #>>41883207 #
og_kalu No.41882479
The ChatGPT site crossed 3B visits last month (for perspective: https://imgur.com/a/hqE7jia). It has been >2B since May this year and >1.5B since March 2023. The summer slump of last year? Completely gone.

Gemini and Character AI? A few hundred million. Claude? Doesn't even register. And the gap has only been increasing.

So, "just" brand recognition ? That feels like saying Google "just" has brand recognition over Bing.

https://www.similarweb.com/blog/insights/ai-news/chatgpt-top...

replies(6): >>41883104 #>>41883831 #>>41884315 #>>41884501 #>>41884732 #>>41885686 #
_hark No.41884315
Claude's API usage absolutely registers (about 60% the size of OpenAI's); it's just that their chat interface isn't as popular. [1]

[1]: https://www.tanayj.com/p/openai-and-anthropic-revenue-breakd...

replies(1): >>41884383 #
og_kalu No.41884383
ChatGPT usage from the main site dwarfs API usage for both OpenAI and Anthropic, so we're not really saying different things here.

The vast majority of people using LLMs just use ChatGPT directly. Anthropic is doing fine for technical or business customers looking to offer LLM services in a wrapper, but that doesn't mean they register in the public consciousness.

replies(1): >>41884431 #
commandar No.41884431
>Anthropic is doing fine for technical or business customers looking to offer LLM services in a wrapper

If there's an actual business to be found in all this, that's where it's going to be.

The consumer side of this currently bleeds cash, and I'm deeply skeptical that enough of the public can be convinced to pay subscription fees high enough to cover running costs. (A rough sketch of that math follows below.)
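
As a back-of-envelope sketch of the concern (every number below is a made-up assumption for illustration, not a real OpenAI or Anthropic figure):

    # Can a flat consumer subscription cover per-user inference cost?
    # All numbers are hypothetical assumptions, not reported figures.

    subscription_per_month = 20.00   # assumed consumer plan price, USD
    cost_per_million_tokens = 5.00   # assumed blended inference cost, USD
    tokens_per_chat = 2_000          # assumed prompt + response size
    chats_per_day = 30               # assumed heavy-user activity

    monthly_tokens = tokens_per_chat * chats_per_day * 30
    monthly_cost = monthly_tokens / 1_000_000 * cost_per_million_tokens

    print(f"Inference cost per heavy user: ${monthly_cost:.2f}/month")
    print(f"Margin on a ${subscription_per_month:.0f} plan: "
          f"${subscription_per_month - monthly_cost:.2f}")
    # With these assumptions the plan covers usage (~$9 vs. $20), but halve
    # the price or triple the usage and the margin evaporates quickly.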

replies(2): >>41884979 #>>41890396 #
moralestapia No.41884979
No one here gets it, even though @sama has said it countless times.

I will write it explicitly for you once again:

The plan is to make inference so cheap it's negligible.

replies(3): >>41885039 #>>41885653 #>>41887345 #
downWidOutaFite No.41885039
So... ad-funded?
replies(1): >>41885157 #
Incipient No.41885157
I think they mean the cost of actually running inference: either more efficient/powerful hardware, or more efficient software.

No one thinks about the cost of a DB query anymore, but I'm sure people did back in the day (well, I suppose with cloud billing people do need to think about it again, haha).
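
As a sketch of what "cheap enough to be negligible" might look like per request (all figures below are hypothetical assumptions, not measured costs for any provider):

    # Rough per-request inference cost under assumed hardware and throughput.
    # Every figure here is a hypothetical assumption for illustration.

    gpu_cost_per_hour = 2.50      # assumed rented-GPU price, USD
    tokens_per_second = 1_500     # assumed aggregate throughput per GPU
    tokens_per_request = 1_000    # assumed prompt + completion size

    cost_per_token = gpu_cost_per_hour / (tokens_per_second * 3600)
    cost_per_request = cost_per_token * tokens_per_request

    print(f"Cost per request: ${cost_per_request:.6f}")
    # ~$0.0005 here; double the throughput (better software) or halve the
    # hardware price and a request starts to cost about what a DB query does.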

replies(2): >>41885822 #>>41886099 #
datadrivenangel No.41885822
Anybody with billions of database queries thinks about them.
replies(1): >>41888250 #
moralestapia No.41888250
Yeah, but GP said one query.