
393 points pyman | 1 comment
bgwalter No.44490836
Here is how individuals are treated for massive copyright infringement:

https://investors.autodesk.com/news-releases/news-release-de...

farceSpherule No.44491907
Peterson was copying and selling pirated software.

Come up with a better comparison.

organsnyder No.44491926
Anthropic is selling a service that incorporates these pirated works.
adolph No.44492293
That a service incorporating the authors' works exists is not at issue. The plaintiffs' claims, as summarized by Judge Alsup, are:

  First, Authors argue that using works to train Claude’s underlying LLMs 
  was like using works to train any person to read and write, so Authors 
  should be able to exclude Anthropic from this use (Opp. 16). 

  Second, to that last point, Authors further argue that the training was 
  intended to memorize their works’ creative elements — not just their 
  works’ non-protectable ones (Opp. 17).

  Third, Authors next argue that computers nonetheless should not be 
  allowed to do what people do. 
https://media.npr.org/assets/artslife/arts/2025/order.pdf
lawlessone No.44492890
> underlying LLMs was like using works to train any person to read and write

I don't think humans learn via backprop or in rounds/batches; our learning is more "online".

If I input text into an LLM, it doesn't learn from it unless the creators deliberately include that data in the next training run for their model.
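The distinction being drawn can be sketched in a toy example (illustrative only, nothing like a real LLM): a model is fit offline by batch gradient descent over a fixed corpus, and at inference time its weights are frozen, so feeding it new input changes nothing.

```python
def train(corpus, epochs=100, lr=0.1):
    """Fit y = w * x by batch gradient descent over a fixed corpus."""
    w = 0.0
    for _ in range(epochs):
        # One "round": gradient of mean squared error over the whole batch.
        grad = sum(2 * (w * x - y) * x for x, y in corpus) / len(corpus)
        w -= lr * grad
    return w

def infer(w, x):
    """Inference: uses the learned weight but never modifies it."""
    return w * x

corpus = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # data drawn from y = 2x
w = train(corpus)

before = w
_ = infer(w, 10.0)   # new input at inference time...
assert w == before   # ...leaves the learned weight untouched
```

An "online" learner, by contrast, would fold each new example into its weights as it arrives, which is closer to what the comment attributes to humans.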

Humans also don't require samples of every text in history to learn to read and write well.

Hunter S. Thompson didn't need to ingest the Harry Potter books to write.