
614 points nickthegreek | 1 comments | | HN request time: 0s | source
mgreg ◴[] No.39121867[source]
Unsurprising but disappointing nonetheless. Let’s just try to learn from it.

It’s popular in the AI space to claim altruism and openness; OpenAI, Anthropic and xAI (the new Musk one) all have unusual governance structures because they want to be a public good. The challenge is that once any of these (or others) gain enough traction to be seen as having a good chance at reaping billions in profits, things change.

And it’s not just AI companies, and this isn’t new. This is part of human nature and always will be.

We should be putting more emphasis and attention on truly open AI models (open training data, training source code & hyperparameters, model source code, weights) so the benefits of AI accrue to the public and not just a few companies.

[edit - eliminated specific company mentions]

1. sirspacey ◴[] No.39126566[source]
Fully agree on open models, but I think there’s more going on that is important to consider in our own founding journeys.

It’s not just that there are billions to be made (they always believed that); it’s that people are making billions right now, turning those governance structures into a paper tiger.

When only the tech sector cares about a company, it’s fairly straightforward for it to be values-driven — necessary, even. Engineers generally, especially early adopters, are thoughtful and ethical. They also tend to be fact-driven in assessing a company’s intentions.

Once a company exits the tech culture bubble, misinformation and political footballs become the game. Defending against that is something every company learns quickly. It is existential, and the playing field is perpetually unfair.