Copyright in its current form is ridiculous, but I support some (much-pared-back) version of copyright that limits rights further, expands fair use, repeals the DMCA, and reduces the copyright term to something on the order of 15-20 years (perhaps with a renewal option as with patents).
I've released a lot of software under the GPL, and the GPL in its current form couldn't exist without copyright.
What copyright should do is protect individual creators, not corporations. And it should protect them even when their work is processed through complex statistical algorithms such as LLMs.
LLMs wouldn't be possible without the _trillions_ of hours of work by the people who wrote the books, code, music, etc. they are trained on. The _millions_ of hours spent on the training algorithms, the chat interfaces, the scraping scripts, and so on are barely a drop in the bucket.
There is no reason the people who put in mere millions of hours of work should get all the reward while giving nothing back to the rest of the world, who put in trillions.
Your point remains, but that alone doesn't solve the problem of dividing responsibility and financial credit. Do you know if the OpenAI lawsuits have laid this out?
With code, some licenses are compatible: for example, you could take a model trained on GPL and MIT code and use it to produce GPL code. (The resulting model would _of course_ also be a derivative work licensed under the GPL.) That addresses the biggest elephant in the room: giving users their rights to inspect and modify the code. Giving credit to individual authors is more difficult, though.
I haven't been following the lawsuits much; I'm powerless to influence them. But having written my fair share of GPL and AGPL code, I find this whole LLM thing feels like being spat in the face.