
394 points by pyman | 1 comment
platunit10 (No.44492696)
Every time an article like this surfaces, it seems the majority of tech folks believe that training AI on copyrighted material is NOT fair use, while the legal industry disagrees.

Which of the following are true?

(a) the legal industry is susceptible to influence and corruption

(b) engineers don't understand how to interpret legal text

(c) AI tech is new, and judges aren't technically qualified to decide these scenarios

The most likely option is (c), as we've seen this pattern many times before.

1. OkayPhysicist (No.44492932)
There's a lot of conflation of "should/shouldn't" with "is/isn't" here. The tech folks you're alluding to mostly think it "shouldn't" be fair use, out of concern about the societal consequences, whereas judges are saying it "is" fair use, based on existing law.

Any reasonable reading of current fair use doctrine makes it obvious that the transformation from Harry Potter and the Sorcerer's Stone into "a computer program that outputs responses to user prompts about a variety of topics" is wildly transformative, and thus the use of the copyrighted material is probably covered by fair use.