
614 points by nickthegreek | 1 comment
mgreg No.39121867
Unsurprising but disappointing nonetheless. Let’s just try to learn from it.

It’s popular in the AI space to claim altruism and openness; OpenAI, Anthropic and xAI (the new Musk one) all have funky governance structures because they want to be a public good. The challenge is that once any of these (or others) gains enough traction to be seen as having a good chance at reaping billions in profits, things change.

And it’s not just AI companies, and this isn’t new. This is part of human nature and always will be.

We should be putting more emphasis on truly open AI models (open training data, training source code and hyperparameters, model source code, weights) so the benefits of AI accrue to the public rather than just a few companies.

[edit - eliminated specific company mentions]

replies(17): >>39122377 >>39122548 >>39122564 >>39122633 >>39122672 >>39122681 >>39122683 >>39122910 >>39123084 >>39123321 >>39124167 >>39124930 >>39125603 >>39126566 >>39126621 >>39127428 >>39132151
digging No.39123084
It isn't just money, though. Every leading AI lab is also terrified that another lab will beat them to [impossible-to-specify threshold for AGI], which provides additional incentive to keep their research secret.
replies(1): >>39123246
JohnFen No.39123246
But isn't that fear of having someone else get there first just a fear that they won't be able to maximize their profit if that happens? Otherwise, why would they be so worried about it?
replies(2): >>39123392 >>39131709
digging No.39131709
No, it's a fear that the other lab will take over the world. Profit is secondary to that. (Whether or not you or I think that's a reasonable fear is immaterial.)