
747 points | porridgeraisin | 1 comment
Deegy | No.45064530
I guess I'll take the other side of what most are arguing in this thread.

Isn't it a good thing for us to collectively allow LLMs to train on past conversations? LLMs probably won't get significantly better without this data.

That said, I do recognize the risk of only a handful of companies being responsible for something as important as the collective knowledge of civilization.

Is the long-term solution self-custody? Organizations or individuals could run and train models locally in order to protect and distribute their learnings internally. Of course, costs have to come down a ridiculous amount for this to be feasible.

replies(8)
1. cowpig | No.45065881
> That said I do recognize the risk of only a handful of companies being responsible for something as important as the collective knowledge of civilization.

It's not just the risk of irresponsible behaviour (which is extremely important in a situation with so much power imbalance).

It's also just the basic economics of monopolistic markets: the fewer producers there are, the closer the equilibrium price gets to the one that maximizes the producers' economic surplus.
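
A rough sketch of that mechanism, with entirely made-up numbers: a lone seller facing a downward-sloping demand curve prices well above cost, because that is what maximizes its surplus, while competition pushes the price back toward marginal cost.

    # Hypothetical linear-demand market (all numbers made up for illustration):
    # a lone producer prices far above marginal cost because that maximizes
    # its surplus; under competition the price is driven down to cost.
    a, b = 100.0, 1.0          # demand curve: quantity = a - b * price
    c = 10.0                   # constant marginal cost per unit

    def producer_surplus(price):
        quantity = max(a - b * price, 0.0)
        return (price - c) * quantity

    competitive_price = c      # competition pushes price to marginal cost
    monopoly_price = max(range(0, 101), key=producer_surplus)  # search 0..100

    print("competitive price:", competitive_price,
          "-> surplus:", producer_surplus(competitive_price))
    print("monopoly price:   ", monopoly_price,
          "-> surplus:", producer_surplus(monopoly_price))
    # With these numbers: competitive surplus is 0, while the monopolist
    # charges ~55 (vs a cost of 10) and captures ~2025 from buyers.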

These companies operate for-profit in a market, and so they will naturally trend toward capturing as much value as they can, at the expense of everyone else.

If every business in the world depends on AI, this effectively becomes a tax on all business activity.

This is obviously not in the collective interest.

Of course, this analysis makes simplifying assumptions about the oligopoly. The reality is much worse: the whole system creates an inherent information asymmetry. Try to imagine what the "optimal" pricing strategy is for a product where the producer knows intimate details about every consumer.
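
To make that concrete, a toy comparison with hypothetical willingness-to-pay figures: a seller that knows every buyer exactly can charge each of them their full reservation price (first-degree price discrimination), earning far more than any single posted price would and leaving consumers with no surplus at all.

    # Toy comparison with hypothetical willingness-to-pay values: one posted
    # price for everyone vs. personalized prices when the seller knows each
    # buyer's limit exactly.
    willingness_to_pay = [12, 25, 40, 60, 90]
    cost = 10                                  # cost to serve one buyer

    def profit_at_posted_price(p):
        return sum(p - cost for w in willingness_to_pay if w >= p)

    best_posted = max(willingness_to_pay, key=profit_at_posted_price)
    posted_profit = profit_at_posted_price(best_posted)

    # Perfect knowledge of every buyer: charge each exactly what they'd pay,
    # leaving them zero surplus.
    personalized_profit = sum(w - cost for w in willingness_to_pay if w >= cost)

    print("best single posted price:", best_posted, "-> profit:", posted_profit)
    print("personalized pricing     -> profit:", personalized_profit)
    # Here the best posted price (60) earns 100, while personalized pricing
    # earns 177 and extracts every buyer's full surplus.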