
747 points by porridgeraisin | 1 comment
Deegy ◴[] No.45064530[source]
I guess I'll take the other side of what most are arguing in this thread.

Isn't it a great thing for us to collectively allow LLMs to train on past conversations? LLMs probably won't get significantly better without this data.

That said, I do recognize the risk of only a handful of companies being responsible for something as important as the collective knowledge of civilization.

Is the long-term solution self-custody? Organizations or individuals could use and train models locally in order to protect their data and distribute what they learn internally. Of course, costs would have to come down by a ridiculous amount for this to be feasible.
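
For what it's worth, a minimal sketch of what that local loop could look like, assuming the Hugging Face transformers/datasets stack; the model name and internal.jsonl are placeholders for whatever an organization actually runs:

    # Sketch only: fine-tune a small open model on an internal dataset,
    # entirely on local hardware, so the data never leaves the machine.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              Trainer, TrainingArguments)

    model_name = "distilgpt2"  # placeholder; any local causal LM works
    tok = AutoTokenizer.from_pretrained(model_name)
    tok.pad_token = tok.eos_token
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # "internal.jsonl" is a hypothetical file of {"text": ...} records.
    data = load_dataset("json", data_files="internal.jsonl")["train"]

    def tokenize(batch):
        out = tok(batch["text"], truncation=True, max_length=512,
                  padding="max_length")
        out["labels"] = out["input_ids"].copy()  # causal LM: labels = inputs
        return out

    data = data.map(tokenize, batched=True, remove_columns=data.column_names)

    trainer = Trainer(
        model=model,
        train_dataset=data,
        args=TrainingArguments(output_dir="local_model",
                               num_train_epochs=1,
                               per_device_train_batch_size=2),
    )
    trainer.train()
    trainer.save_model("local_model")  # weights stay in-house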

replies(8): >>45064563 #>>45064781 #>>45064999 #>>45065881 #>>45066363 #>>45068149 #>>45069438 #>>45072552 #
monsieurbanana ◴[] No.45064563[source]
You mean collectively allowing us to train Claude's LLM? Pretty big omission there.
replies(1): >>45064621 #
Deegy ◴[] No.45064621[source]
I believe I addressed that in my third paragraph?

It does suck that there are only a few companies with enough resources to offer these models. But it's hard to escape the power laws.

I'm hoping that costs come down to the point where these things are basically a commodity with thousands of providers.

replies(1): >>45066543 #
monsieurbanana ◴[] No.45066543[source]
Save your prompts, anonymize them, and offer them to anyone who wants to train an LLM; that is us collectively training LLMs.
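
A minimal sketch of that pipeline in Python, using only the standard library; the regexes are illustrative (emails and phone-like numbers only), and real anonymization would need proper PII detection and review:

    # Sketch: scrub obvious PII from saved prompts and export a
    # shareable JSONL dataset. Catches only emails and phone-like
    # numbers; NER-based PII detection would be needed in practice.
    import json
    import re

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
    PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def scrub(text):
        return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))

    saved_prompts = ["Email me at jane@example.com about the invoice"]  # placeholder
    with open("shared_prompts.jsonl", "w", encoding="utf-8") as f:
        for prompt in saved_prompts:
            f.write(json.dumps({"prompt": scrub(prompt)}) + "\n")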

Giving Claude your private data ensures that there will not be thousands of providers, since the limiting factor isn't power but data.