
614 points nickthegreek | 2 comments | | HN request time: 0s | source
mgreg No.39121867
Unsurprising but disappointing nonetheless. Let’s just try to learn from it.

It’s popular in the AI space to claim altruism and openness; OpenAI, Anthropic, and xAI (the new Musk one) all have unusual governance structures because they want to be seen as a public good. The challenge is that once any of these (or others) gain enough traction to have a real shot at reaping billions in profits, things change.

And it’s not just AI companies, and this isn’t new. This is part of human nature and always will be.

We should be putting more emphasis and attention on truly open AI models (open training data, training source code and hyperparameters, model source code, and weights) so the benefits of AI accrue to the public and not just to a few companies.

[edit - eliminated specific company mentions]

1. bane No.39125603
The governance structure is advertising. "Trust us, look, we're trustworthy" is intended to convince people to use what they are building.

But the structure is expensive and risky; tossing it aside once traction is gained is the plan.

2. Andrex No.39125831
See also this article on the failed social network Ello[1], which also proclaimed many lofty things and also incorporated as a "Public Benefit Corporation."

1. https://news.ycombinator.com/item?id=39043871