
114 points dworks | 2 comments
tengbretson ◴[] No.44482169[source]
In the LLM intellectual property paradigm, I think this registers as a solid "Who cares?" level offence.
replies(7): >>44482174 #>>44482176 #>>44482191 #>>44482209 #>>44482275 #>>44482276 #>>44482505 #
didibus ◴[] No.44482191[source]
Ya, the models have already stolen everyone's copyrighted intellectual property. Not sure I have a lot of sympathy; in fact, the more the merrier. If we're going to brush off the fact that they're all trained on copyrighted material, we might as well make sure they end up as a really cheap, competitive, low-margin, accessible commodity.
replies(1): >>44482313 #
1. lambdasquirrel ◴[] No.44482313[source]
Eh... you should read the article. It sounds like a pretty big deal.
replies(1): >>44484179 #
2. didibus ◴[] No.44484179[source]
I did read the article. Apart from it sounding like a terrible place to work, I'm not sure I see what the big deal is.

No one knows how any of the models got made; their training data is kept secret, we don't know what it contains, and so on. I'm also pretty sure a few of the main labs poached each other's employees, who just reimplemented the same training pipelines with some twists.

Most LLMs are also based on initial research papers where most of the discovery and innovation took place.

And in the very end, it's all trained on data that very few people agreed or intended to be used for this purpose, and for which they'll never see a dime.

So why not wrap and rewrap models and resell them, and let them all compete on who offers the cheapest plan or per-token cost?