
393 points | pyman | 1 comment
platunit10 No.44492696
Every time an article like this surfaces, it always seems like the majority of tech folks believe that training AI on copyrighted material is NOT fair use, but the legal industry disagrees.

Which of the following are true?

(a) the legal industry is susceptible to influence and corruption

(b) engineers don't understand how to legally interpret legal text

(c) AI tech is new, and judges aren't technically qualified to decide these scenarios

The most likely option is (c), as we've seen this pattern many times before.

replies(9)
standardUser No.44493664
I don't understand the resistance to training LLMs on any and all available materials. Then again, I've always viewed piracy as compatible with markets and a democratizing force upon them. I thought (wrongly?) that this was the widespread progressive/leftist perspective: to err on the side of access to information.