
614 points by nickthegreek | 1 comment
mgreg ◴[] No.39121867[source]
Unsurprising but disappointing nonetheless. Let’s just try to learn from it.

It’s popular in the AI space to claim altruism and openness; OpenAI, Anthropic and xAI (the new Musk one) all have a funky governance structure because they want to be a public good. The challenge is that once any of these (or others) gain enough traction to be seen as having a good chance at reaping billions in profits, things change.

And it’s not just AI companies, and this isn’t new. It’s part of human nature and always will be.

We should be putting more emphasis and attention on truly open AI models (open training data, training source code and hyperparameters, model source code, weights) so the benefits of AI accrue to the public and not just to a few companies.

[edit - eliminated specific company mentions]

replies(17): >>39122377 #>>39122548 #>>39122564 #>>39122633 #>>39122672 #>>39122681 #>>39122683 #>>39122910 #>>39123084 #>>39123321 #>>39124167 #>>39124930 #>>39125603 #>>39126566 #>>39126621 #>>39127428 #>>39132151 #
yieldcrv ◴[] No.39124930[source]
OpenAI raised $130 million when it was only a non-profit and had difficulty raising more, despite the stacked deck, the star-studded staff, and the same goal that would later value participation units at $100bn

that’s the real lesson here. we can want to redo OpenAI all we want, but people will not use their discretion to fund it until they can make a return

replies(1): >>39133191 #
1. insane_dreamer ◴[] No.39133191[source]
yeah, this was ultimately the problem

it turned out that AI research required billions of dollars to train and run the LLMs, something that was not originally anticipated; and the only way to get that kind of money is to sell your future (and your soul) to investors who want to see a substantial return