
747 points | porridgeraisin | 1 comment
Deegy ◴[] No.45064530[source]
I guess I'll take the other side of what most are arguing in this thread.

Isn't it a great thing for us to collectively allow LLMs to train on past conversations? LLMs probably won't get significantly better without this data.

That said, I do recognize the risk of only a handful of companies being responsible for something as important as the collective knowledge of civilization.

Is the long-term solution self-custody? Organizations or individuals could use and train models locally in order to protect their learnings and distribute them internally. Of course, costs would have to come down a ridiculous amount for this to be feasible.

1. mitthrowaway2 ◴[] No.45069438[source]
I'm okay with LLMs not getting better.