
321 points | jhunter1016
WithinReason:
Does OpenAI have any fundamental advantage beyond brand recognition?
idunnoman1222:
Yes, they already collected all the data. That same data has since had walls put up around it.
lolinder:
That gives everyone who has already started an advantage over newcomers, but it's not an advantage unique to OpenAI.

The question really should be: what, if anything, gives OpenAI an advantage over Anthropic, Google, Meta, or Amazon? There are at least four players intent on eating OpenAI's market share who already have models in the same ballpark as OpenAI's. Is there any reason to suppose that OpenAI keeps the lead for long?

XenophileJKO:
I think their current advantage is a willingness to risk public usage of frontier technology. This has been, and I predict will remain, their unique dynamic. It forced the entire market to react, but the competition is still reacting reluctantly. Just this morning, for example, I played with Gemini and it wouldn't make an image with a person in it at all. I think that tells you all you need to know about most of the competition.
lolinder:
How about Anthropic?
jazzyjackson:
Aren't they essentially run by safetyists? If so, they'd be less willing to release a model that pushes the boundaries of capability and agency.
caeril:
From what I've seen, Claude Sonnet 3.5 is decidedly less "safe" than GPT-4o, by the relatively new politicized understanding of "safety".

Anthropic takes safety to mean "let's not teach people how to build thermite bombs, engineer grey-goo nanobots, or design genome-targeted viruses", which is the traditional futurist concern with AI safety.

OpenAI and Google safety teams are far more concerned with revising history, protecting egos, and coddling the precious feelings of their users. As long as no fee-fees are hurt, it's full speed ahead to paperclip maximization.

walleeee:
Not to dispute your particular comment, which I think is right, but it's worth pointing out that we're full steam ahead on paperclips regardless of any AI company. This has been true for some 300 years, or longer depending on how flexible we are with definitions and where we locate inflection points.