Or what if you're not even distributing the model, but rather distributing the outputs of the LLM (as with a closed-source LLM like Anthropic's)?
I am genuinely curious whether there is some gray area that might be exploited by AI companies, since I'm pretty sure they don't want to pay $1.5B yet still want to exploit the works of authors (let's call a spade a spade).
We really are getting at some metaphysical/philosophical questions, and maybe we will one day arrive at a question that just can't be answered (I think this is pretty close, right?). Then AI companies could do things freely without being held accountable, since sure, you could take it to the courts, but how would they even come to a decision...?
Another question, though:
Let's say the NYT vs. OpenAI case is ongoing. While they are litigating, could OpenAI still continue doing the same thing?
The EU has copyright exemptions for AI training. You don't need to respect opt-outs if you are doing research.
South Korea and Japan have some exemptions too, I think?
Singapore has very strong copyright exemptions for AI training. You can completely ignore opt-outs legally, even when doing it commercially.
Just search for "TDM laws globally".